2025-09-08 00:00:07.276784 | Job console starting
2025-09-08 00:00:07.307692 | Updating git repos
2025-09-08 00:00:07.740563 | Cloning repos into workspace
2025-09-08 00:00:07.925952 | Restoring repo states
2025-09-08 00:00:07.945789 | Merging changes
2025-09-08 00:00:07.945805 | Checking out repos
2025-09-08 00:00:08.303805 | Preparing playbooks
2025-09-08 00:00:09.024638 | Running Ansible setup
2025-09-08 00:00:14.733483 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-08 00:00:16.605523 |
2025-09-08 00:00:16.605677 | PLAY [Base pre]
2025-09-08 00:00:16.625920 |
2025-09-08 00:00:16.626039 | TASK [Setup log path fact]
2025-09-08 00:00:16.646738 | orchestrator | ok
2025-09-08 00:00:16.684947 |
2025-09-08 00:00:16.685097 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-08 00:00:16.774953 | orchestrator | ok
2025-09-08 00:00:16.831932 |
2025-09-08 00:00:16.832064 | TASK [emit-job-header : Print job information]
2025-09-08 00:00:16.954202 | # Job Information
2025-09-08 00:00:16.954402 | Ansible Version: 2.16.14
2025-09-08 00:00:16.954494 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-09-08 00:00:16.954533 | Pipeline: periodic-midnight
2025-09-08 00:00:16.954557 | Executor: 521e9411259a
2025-09-08 00:00:16.954578 | Triggered by: https://github.com/osism/testbed
2025-09-08 00:00:16.954600 | Event ID: b884a41641f4497aaf5d5aa1027ab29b
2025-09-08 00:00:16.988746 |
2025-09-08 00:00:16.988875 | LOOP [emit-job-header : Print node information]
2025-09-08 00:00:17.368340 | orchestrator | ok:
2025-09-08 00:00:17.368539 | orchestrator | # Node Information
2025-09-08 00:00:17.368574 | orchestrator | Inventory Hostname: orchestrator
2025-09-08 00:00:17.368600 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-08 00:00:17.368634 | orchestrator | Username: zuul-testbed05
2025-09-08 00:00:17.368655 | orchestrator | Distro: Debian 12.12
2025-09-08 00:00:17.368679 | orchestrator | Provider: static-testbed
2025-09-08 00:00:17.368700 | orchestrator | Region:
2025-09-08 00:00:17.368721 | orchestrator | Label: testbed-orchestrator
2025-09-08 00:00:17.368741 | orchestrator | Product Name: OpenStack Nova
2025-09-08 00:00:17.368761 | orchestrator | Interface IP: 81.163.193.140
2025-09-08 00:00:17.402773 |
2025-09-08 00:00:17.405408 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-08 00:00:18.942656 | orchestrator -> localhost | changed
2025-09-08 00:00:18.948987 |
2025-09-08 00:00:18.949077 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-08 00:00:21.117677 | orchestrator -> localhost | changed
2025-09-08 00:00:21.129460 |
2025-09-08 00:00:21.129556 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-08 00:00:21.608598 | orchestrator -> localhost | ok
2025-09-08 00:00:21.614348 |
2025-09-08 00:00:21.614447 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-08 00:00:21.661986 | orchestrator | ok
2025-09-08 00:00:21.699561 | orchestrator | included: /var/lib/zuul/builds/8b726dec01db4467b34e0a590ec8733d/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-08 00:00:21.732126 |
2025-09-08 00:00:21.732226 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-08 00:00:24.662372 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-08 00:00:24.662544 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/8b726dec01db4467b34e0a590ec8733d/work/8b726dec01db4467b34e0a590ec8733d_id_rsa
2025-09-08 00:00:24.662578 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/8b726dec01db4467b34e0a590ec8733d/work/8b726dec01db4467b34e0a590ec8733d_id_rsa.pub
2025-09-08 00:00:24.662600 | orchestrator -> localhost | The key fingerprint is:
2025-09-08 00:00:24.662647 | orchestrator -> localhost | SHA256:4UrARyy8XkjI1DU97TvzAchkCgZuGP59OY0twnhs4eo zuul-build-sshkey
2025-09-08 00:00:24.662668 | orchestrator -> localhost | The key's randomart image is:
2025-09-08 00:00:24.662697 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-08 00:00:24.662715 | orchestrator -> localhost | |.oo= o+. . |
2025-09-08 00:00:24.662734 | orchestrator -> localhost | |.+o.B...= . |
2025-09-08 00:00:24.662750 | orchestrator -> localhost | |..ooo*.=.+ |
2025-09-08 00:00:24.662767 | orchestrator -> localhost | | .. Oo+.B.o |
2025-09-08 00:00:24.662783 | orchestrator -> localhost | | + X.*So o |
2025-09-08 00:00:24.662804 | orchestrator -> localhost | | =.o.o + . |
2025-09-08 00:00:24.662822 | orchestrator -> localhost | | . . + . |
2025-09-08 00:00:24.662862 | orchestrator -> localhost | | . . |
2025-09-08 00:00:24.662881 | orchestrator -> localhost | | E |
2025-09-08 00:00:24.662897 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-08 00:00:24.662940 | orchestrator -> localhost | ok: Runtime: 0:00:01.712175
2025-09-08 00:00:24.668855 |
2025-09-08 00:00:24.668940 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-08 00:00:24.716485 | orchestrator | ok
2025-09-08 00:00:24.734163 | orchestrator | included: /var/lib/zuul/builds/8b726dec01db4467b34e0a590ec8733d/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-08 00:00:24.753643 |
2025-09-08 00:00:24.753745 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-08 00:00:24.796539 | orchestrator | skipping: Conditional result was False
2025-09-08 00:00:24.802994 |
2025-09-08 00:00:24.803086 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-08 00:00:25.762500 | orchestrator | changed
2025-09-08 00:00:25.767704 |
2025-09-08 00:00:25.767784 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-08 00:00:26.064902 | orchestrator | ok
2025-09-08 00:00:26.069863 |
2025-09-08 00:00:26.069943 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-08 00:00:28.625575 | orchestrator | ok
2025-09-08 00:00:28.630660 |
2025-09-08 00:00:28.630740 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-08 00:00:29.093925 | orchestrator | ok
2025-09-08 00:00:29.101019 |
2025-09-08 00:00:29.101103 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-08 00:00:29.131015 | orchestrator | skipping: Conditional result was False
2025-09-08 00:00:29.136688 |
2025-09-08 00:00:29.136772 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-08 00:00:30.106555 | orchestrator -> localhost | changed
2025-09-08 00:00:30.118320 |
2025-09-08 00:00:30.118412 | TASK [add-build-sshkey : Add back temp key]
2025-09-08 00:00:30.848672 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/8b726dec01db4467b34e0a590ec8733d/work/8b726dec01db4467b34e0a590ec8733d_id_rsa (zuul-build-sshkey)
2025-09-08 00:00:30.848856 | orchestrator -> localhost | ok: Runtime: 0:00:00.031537
2025-09-08 00:00:30.854702 |
2025-09-08 00:00:30.855503 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-08 00:00:31.391745 | orchestrator | ok
2025-09-08 00:00:31.396656 |
2025-09-08 00:00:31.396737 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-08 00:00:31.419124 | orchestrator | skipping: Conditional result was False
2025-09-08 00:00:31.459842 |
2025-09-08 00:00:31.459941 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-08 00:00:31.871853 | orchestrator | ok
2025-09-08 00:00:31.881464 |
2025-09-08 00:00:31.881591 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-08 00:00:31.911585 | orchestrator | ok
2025-09-08 00:00:31.918591 |
2025-09-08 00:00:31.918717 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-08 00:00:32.265285 | orchestrator -> localhost | ok
2025-09-08 00:00:32.272276 |
2025-09-08 00:00:32.272362 | TASK [validate-host : Collect information about the host]
2025-09-08 00:00:33.499308 | orchestrator | ok
2025-09-08 00:00:33.542659 |
2025-09-08 00:00:33.542773 | TASK [validate-host : Sanitize hostname]
2025-09-08 00:00:33.661363 | orchestrator | ok
2025-09-08 00:00:33.671491 |
2025-09-08 00:00:33.671580 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-08 00:00:35.018094 | orchestrator -> localhost | changed
2025-09-08 00:00:35.023395 |
2025-09-08 00:00:35.023486 | TASK [validate-host : Collect information about zuul worker]
2025-09-08 00:00:35.571016 | orchestrator | ok
2025-09-08 00:00:35.580444 |
2025-09-08 00:00:35.580566 | TASK [validate-host : Write out all zuul information for each host]
2025-09-08 00:00:36.229363 | orchestrator -> localhost | changed
2025-09-08 00:00:36.237752 |
2025-09-08 00:00:36.237832 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-08 00:00:36.509089 | orchestrator | ok
2025-09-08 00:00:36.514007 |
2025-09-08 00:00:36.514085 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-08 00:01:33.079062 | orchestrator | changed:
2025-09-08 00:01:33.079298 | orchestrator | .d..t...... src/
2025-09-08 00:01:33.079334 | orchestrator | .d..t...... src/github.com/
2025-09-08 00:01:33.079360 | orchestrator | .d..t...... src/github.com/osism/
2025-09-08 00:01:33.079383 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-08 00:01:33.079404 | orchestrator | RedHat.yml
2025-09-08 00:01:33.092660 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-08 00:01:33.092678 | orchestrator | RedHat.yml
2025-09-08 00:01:33.092745 | orchestrator | = 1.53.0"...
2025-09-08 00:01:51.206161 | orchestrator | 00:01:51.205 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-09-08 00:01:51.458129 | orchestrator | 00:01:51.457 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-08 00:01:51.897492 | orchestrator | 00:01:51.897 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-08 00:01:52.498572 | orchestrator | 00:01:52.498 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-08 00:01:53.441712 | orchestrator | 00:01:53.441 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-08 00:01:54.031654 | orchestrator | 00:01:54.031 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-08 00:01:54.932238 | orchestrator | 00:01:54.931 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-08 00:01:54.932369 | orchestrator | 00:01:54.932 STDOUT terraform: Providers are signed by their developers.
2025-09-08 00:01:54.932389 | orchestrator | 00:01:54.932 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-08 00:01:54.932402 | orchestrator | 00:01:54.932 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-08 00:01:54.932414 | orchestrator | 00:01:54.932 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-08 00:01:54.932442 | orchestrator | 00:01:54.932 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-08 00:01:54.932460 | orchestrator | 00:01:54.932 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-08 00:01:54.932476 | orchestrator | 00:01:54.932 STDOUT terraform: you run "tofu init" in the future.
2025-09-08 00:01:54.932491 | orchestrator | 00:01:54.932 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-08 00:01:54.932616 | orchestrator | 00:01:54.932 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-08 00:01:54.932713 | orchestrator | 00:01:54.932 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-08 00:01:54.932726 | orchestrator | 00:01:54.932 STDOUT terraform: should now work.
2025-09-08 00:01:54.932742 | orchestrator | 00:01:54.932 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-08 00:01:54.932754 | orchestrator | 00:01:54.932 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-08 00:01:54.932767 | orchestrator | 00:01:54.932 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-08 00:01:55.047992 | orchestrator | 00:01:55.046 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-09-08 00:01:55.048221 | orchestrator | 00:01:55.046 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-08 00:01:55.315968 | orchestrator | 00:01:55.314 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-08 00:01:55.317004 | orchestrator | 00:01:55.314 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-08 00:01:55.317379 | orchestrator | 00:01:55.314 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-08 00:01:55.317473 | orchestrator | 00:01:55.314 STDOUT terraform: for this configuration.
2025-09-08 00:01:55.458199 | orchestrator | 00:01:55.457 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-09-08 00:01:55.458276 | orchestrator | 00:01:55.458 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-09-08 00:01:55.600154 | orchestrator | 00:01:55.598 STDOUT terraform: ci.auto.tfvars
2025-09-08 00:01:55.600234 | orchestrator | 00:01:55.598 STDOUT terraform: default_custom.tf
2025-09-08 00:01:55.737203 | orchestrator | 00:01:55.737 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
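The provider versions resolved during `tofu init` above imply a `required_providers` block roughly like the following sketch. Only the `>= 2.2.0` constraint for hashicorp/local (and a `>= 1.53.0` fragment) are visible in the log; the other constraints and the exact layout are assumptions, reconstructed from the versions that were actually installed.

```hcl
terraform {
  required_providers {
    # Resolved to v3.2.4 in this run (constraint assumed)
    null = {
      source = "hashicorp/null"
    }
    # Constraint ">= 2.2.0" visible in the log; resolved to v2.5.3
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    # Resolved to v3.3.2 in this run (constraint assumed)
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}
```

Committing the generated `.terraform.lock.hcl`, as the init output suggests, pins these resolved versions for later runs.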
2025-09-08 00:01:57.330141 | orchestrator | 00:01:57.329 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-09-08 00:01:57.881085 | orchestrator | 00:01:57.880 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-09-08 00:01:58.121643 | orchestrator | 00:01:58.121 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-09-08 00:01:58.121736 | orchestrator | 00:01:58.121 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-09-08 00:01:58.121744 | orchestrator | 00:01:58.121 STDOUT terraform:   + create
2025-09-08 00:01:58.121749 | orchestrator | 00:01:58.121 STDOUT terraform:  <= read (data resources)
2025-09-08 00:01:58.121755 | orchestrator | 00:01:58.121 STDOUT terraform: OpenTofu will perform the following actions:
2025-09-08 00:01:58.121898 | orchestrator | 00:01:58.121 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-09-08 00:01:58.121962 | orchestrator | 00:01:58.121 STDOUT terraform:   # (config refers to values not yet known)
2025-09-08 00:01:58.121973 | orchestrator | 00:01:58.121 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-09-08 00:01:58.121979 | orchestrator | 00:01:58.121 STDOUT terraform:   + checksum = (known after apply)
2025-09-08 00:01:58.122032 | orchestrator | 00:01:58.121 STDOUT terraform:   + created_at = (known after apply)
2025-09-08 00:01:58.122142 | orchestrator | 00:01:58.122 STDOUT terraform:   + file = (known after apply)
2025-09-08 00:01:58.122149 | orchestrator | 00:01:58.122 STDOUT terraform:   + id = (known after apply)
2025-09-08 00:01:58.122155 | orchestrator | 00:01:58.122 STDOUT terraform:   + metadata = (known after apply)
2025-09-08 00:01:58.122193 | orchestrator | 00:01:58.122 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-09-08 00:01:58.122245 | orchestrator | 00:01:58.122 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-09-08 00:01:58.122256 | orchestrator | 00:01:58.122 STDOUT terraform:   + most_recent = true
2025-09-08 00:01:58.122262 | orchestrator | 00:01:58.122 STDOUT terraform:   + name = (known after apply)
2025-09-08 00:01:58.122296 | orchestrator | 00:01:58.122 STDOUT terraform:   + protected = (known after apply)
2025-09-08 00:01:58.122308 | orchestrator | 00:01:58.122 STDOUT terraform:   + region = (known after apply)
2025-09-08 00:01:58.122349 | orchestrator | 00:01:58.122 STDOUT terraform:   + schema = (known after apply)
2025-09-08 00:01:58.122380 | orchestrator | 00:01:58.122 STDOUT terraform:   + size_bytes = (known after apply)
2025-09-08 00:01:58.122440 | orchestrator | 00:01:58.122 STDOUT terraform:   + tags = (known after apply)
2025-09-08 00:01:58.122446 | orchestrator | 00:01:58.122 STDOUT terraform:   + updated_at = (known after apply)
2025-09-08 00:01:58.122450 | orchestrator | 00:01:58.122 STDOUT terraform:   }
2025-09-08 00:01:58.122600 | orchestrator | 00:01:58.122 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-09-08 00:01:58.122608 | orchestrator | 00:01:58.122 STDOUT terraform:   # (config refers to values not yet known)
2025-09-08 00:01:58.122649 | orchestrator | 00:01:58.122 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-09-08 00:01:58.122721 | orchestrator | 00:01:58.122 STDOUT terraform:   + checksum = (known after apply)
2025-09-08 00:01:58.122727 | orchestrator | 00:01:58.122 STDOUT terraform:   + created_at = (known after apply)
2025-09-08 00:01:58.122733 | orchestrator | 00:01:58.122 STDOUT terraform:   + file = (known after apply)
2025-09-08 00:01:58.122782 | orchestrator | 00:01:58.122 STDOUT terraform:   + id = (known after apply)
2025-09-08 00:01:58.122790 | orchestrator | 00:01:58.122 STDOUT terraform:   + metadata = (known after apply)
2025-09-08 00:01:58.122823 | orchestrator | 00:01:58.122 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-09-08 00:01:58.122835 | orchestrator | 00:01:58.122 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-09-08 00:01:58.122868 | orchestrator | 00:01:58.122 STDOUT terraform:   + most_recent = true
2025-09-08 00:01:58.122897 | orchestrator | 00:01:58.122 STDOUT terraform:   + name = (known after apply)
2025-09-08 00:01:58.122906 | orchestrator | 00:01:58.122 STDOUT terraform:   + protected = (known after apply)
2025-09-08 00:01:58.122969 | orchestrator | 00:01:58.122 STDOUT terraform:   + region = (known after apply)
2025-09-08 00:01:58.122977 | orchestrator | 00:01:58.122 STDOUT terraform:   + schema = (known after apply)
2025-09-08 00:01:58.122984 | orchestrator | 00:01:58.122 STDOUT terraform:   + size_bytes = (known after apply)
2025-09-08 00:01:58.123029 | orchestrator | 00:01:58.122 STDOUT terraform:   + tags = (known after apply)
2025-09-08 00:01:58.123038 | orchestrator | 00:01:58.123 STDOUT terraform:   + updated_at = (known after apply)
2025-09-08 00:01:58.123098 | orchestrator | 00:01:58.123 STDOUT terraform:   }
2025-09-08 00:01:58.123226 | orchestrator | 00:01:58.123 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-09-08 00:01:58.123303 | orchestrator | 00:01:58.123 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-09-08 00:01:58.123310 | orchestrator | 00:01:58.123 STDOUT terraform:   + content = (known after apply)
2025-09-08 00:01:58.123318 | orchestrator | 00:01:58.123 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-09-08 00:01:58.123374 | orchestrator | 00:01:58.123 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-09-08 00:01:58.123421 | orchestrator | 00:01:58.123 STDOUT terraform:   + content_md5 = (known after apply)
2025-09-08 00:01:58.123428 | orchestrator | 00:01:58.123 STDOUT terraform:   + content_sha1 = (known after apply)
2025-09-08 00:01:58.123639 | orchestrator | 00:01:58.123 STDOUT terraform:   + content_sha256 = (known after apply)
2025-09-08 00:01:58.123650 | orchestrator | 00:01:58.123 STDOUT terraform:   + content_sha512 = (known after apply)
2025-09-08 00:01:58.123655 | orchestrator | 00:01:58.123 STDOUT terraform:   + directory_permission = "0777"
2025-09-08 00:01:58.123658 | orchestrator | 00:01:58.123 STDOUT terraform:   + file_permission = "0644"
2025-09-08 00:01:58.123662 | orchestrator | 00:01:58.123 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-09-08 00:01:58.123666 | orchestrator | 00:01:58.123 STDOUT terraform:   + id = (known after apply)
2025-09-08 00:01:58.123670 | orchestrator | 00:01:58.123 STDOUT terraform:   }
2025-09-08 00:01:58.123745 | orchestrator | 00:01:58.123 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-09-08 00:01:58.123753 | orchestrator | 00:01:58.123 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-09-08 00:01:58.123821 | orchestrator | 00:01:58.123 STDOUT terraform:   + content = (known after apply)
2025-09-08 00:01:58.123831 | orchestrator | 00:01:58.123 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-09-08 00:01:58.123859 | orchestrator | 00:01:58.123 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-09-08 00:01:58.123905 | orchestrator | 00:01:58.123 STDOUT terraform:   + content_md5 = (known after apply)
2025-09-08 00:01:58.123918 | orchestrator | 00:01:58.123 STDOUT terraform:   + content_sha1 = (known after apply)
2025-09-08 00:01:58.123989 | orchestrator | 00:01:58.123 STDOUT terraform:   + content_sha256 = (known after apply)
2025-09-08 00:01:58.123995 | orchestrator | 00:01:58.123 STDOUT terraform:   + content_sha512 = (known after apply)
2025-09-08 00:01:58.124001 | orchestrator | 00:01:58.123 STDOUT terraform:   + directory_permission = "0777"
2025-09-08 00:01:58.124046 | orchestrator | 00:01:58.123 STDOUT terraform:   + file_permission = "0644"
2025-09-08 00:01:58.124074 | orchestrator | 00:01:58.124 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-09-08 00:01:58.124122 | orchestrator | 00:01:58.124 STDOUT terraform:   + id = (known after apply)
2025-09-08 00:01:58.124130 | orchestrator | 00:01:58.124 STDOUT terraform:   }
2025-09-08 00:01:58.124276 | orchestrator | 00:01:58.124 STDOUT terraform:   # local_file.inventory will be created
2025-09-08 00:01:58.124305 | orchestrator | 00:01:58.124 STDOUT terraform:   + resource "local_file" "inventory" {
2025-09-08 00:01:58.124343 | orchestrator | 00:01:58.124 STDOUT terraform:   + content = (known after apply)
2025-09-08 00:01:58.124409 | orchestrator | 00:01:58.124 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-09-08 00:01:58.124419 | orchestrator | 00:01:58.124 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-09-08 00:01:58.124447 | orchestrator | 00:01:58.124 STDOUT terraform:   + content_md5 = (known after apply)
2025-09-08 00:01:58.124483 | orchestrator | 00:01:58.124 STDOUT terraform:   + content_sha1 = (known after apply)
2025-09-08 00:01:58.124513 | orchestrator | 00:01:58.124 STDOUT terraform:   + content_sha256 = (known after apply)
2025-09-08 00:01:58.124612 | orchestrator | 00:01:58.124 STDOUT terraform:   + content_sha512 = (known after apply)
2025-09-08 00:01:58.124618 | orchestrator | 00:01:58.124 STDOUT terraform:   + directory_permission = "0777"
2025-09-08 00:01:58.124623 | orchestrator | 00:01:58.124 STDOUT terraform:   + file_permission = "0644"
2025-09-08 00:01:58.124634 | orchestrator | 00:01:58.124 STDOUT terraform:   + filename = "inventory.ci"
2025-09-08 00:01:58.124639 | orchestrator | 00:01:58.124 STDOUT terraform:   + id = (known after apply)
2025-09-08 00:01:58.124664 | orchestrator | 00:01:58.124 STDOUT terraform:   }
2025-09-08 00:01:58.124872 | orchestrator | 00:01:58.124 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-09-08 00:01:58.124878 | orchestrator | 00:01:58.124 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-09-08 00:01:58.124882 | orchestrator | 00:01:58.124 STDOUT terraform:   + content = (sensitive value)
2025-09-08 00:01:58.124886 | orchestrator | 00:01:58.124 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-09-08 00:01:58.124892 | orchestrator | 00:01:58.124 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-09-08 00:01:58.124936 | orchestrator | 00:01:58.124 STDOUT terraform:   + content_md5 = (known after apply)
2025-09-08 00:01:58.124991 | orchestrator | 00:01:58.124 STDOUT terraform:   + content_sha1 = (known after apply)
2025-09-08 00:01:58.125001 | orchestrator | 00:01:58.124 STDOUT terraform:   + content_sha256 = (known after apply)
2025-09-08 00:01:58.125071 | orchestrator | 00:01:58.124 STDOUT terraform:   + content_sha512 = (known after apply)
2025-09-08 00:01:58.125082 | orchestrator | 00:01:58.125 STDOUT terraform:   + directory_permission = "0700"
2025-09-08 00:01:58.125085 | orchestrator | 00:01:58.125 STDOUT terraform:   + file_permission = "0600"
2025-09-08 00:01:58.125091 | orchestrator | 00:01:58.125 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-09-08 00:01:58.125148 | orchestrator | 00:01:58.125 STDOUT terraform:   + id = (known after apply)
2025-09-08 00:01:58.125154 | orchestrator | 00:01:58.125 STDOUT terraform:   }
2025-09-08 00:01:58.125219 | orchestrator | 00:01:58.125 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-09-08 00:01:58.125250 | orchestrator | 00:01:58.125 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-09-08 00:01:58.125257 | orchestrator | 00:01:58.125 STDOUT terraform:   + id = (known after apply)
2025-09-08 00:01:58.125282 | orchestrator | 00:01:58.125 STDOUT terraform:   }
2025-09-08 00:01:58.125423 | orchestrator | 00:01:58.125 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-09-08 00:01:58.125471 | orchestrator | 00:01:58.125 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-09-08 00:01:58.125511 | orchestrator | 00:01:58.125 STDOUT terraform:   + attachment = (known after apply)
2025-09-08 00:01:58.125518 | orchestrator | 00:01:58.125 STDOUT terraform:   + availability_zone = "nova"
2025-09-08 00:01:58.125568 | orchestrator | 00:01:58.125 STDOUT terraform:   + id = (known after apply)
2025-09-08 00:01:58.125603 | orchestrator | 00:01:58.125 STDOUT terraform:   + image_id = (known after apply)
2025-09-08 00:01:58.125635 | orchestrator | 00:01:58.125 STDOUT terraform:   + metadata = (known after apply)
2025-09-08 00:01:58.125680 | orchestrator | 00:01:58.125 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-09-08 00:01:58.125775 | orchestrator | 00:01:58.125 STDOUT terraform:   + region = (known after apply)
2025-09-08 00:01:58.125781 | orchestrator | 00:01:58.125 STDOUT terraform:   + size = 80
2025-09-08 00:01:58.125785 | orchestrator | 00:01:58.125 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-08 00:01:58.125789 | orchestrator | 00:01:58.125 STDOUT terraform:   + volume_type = "ssd"
2025-09-08 00:01:58.125792 | orchestrator | 00:01:58.125 STDOUT terraform:   }
2025-09-08 00:01:58.125899 | orchestrator | 00:01:58.125 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-09-08 00:01:58.125957 | orchestrator | 00:01:58.125 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-08 00:01:58.125964 | orchestrator | 00:01:58.125 STDOUT terraform:   + attachment = (known after apply)
2025-09-08 00:01:58.126031 | orchestrator | 00:01:58.125 STDOUT terraform:   + availability_zone = "nova"
2025-09-08 00:01:58.126041 | orchestrator | 00:01:58.125 STDOUT terraform:   + id = (known after apply)
2025-09-08 00:01:58.126087 | orchestrator | 00:01:58.126 STDOUT terraform:   + image_id = (known after apply)
2025-09-08 00:01:58.126150 | orchestrator | 00:01:58.126 STDOUT terraform:   + metadata = (known after apply)
2025-09-08 00:01:58.126223 | orchestrator | 00:01:58.126 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-09-08 00:01:58.126229 | orchestrator | 00:01:58.126 STDOUT terraform:   + region = (known after apply)
2025-09-08 00:01:58.126235 | orchestrator | 00:01:58.126 STDOUT terraform:   + size = 80
2025-09-08 00:01:58.126278 | orchestrator | 00:01:58.126 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-08 00:01:58.126285 | orchestrator | 00:01:58.126 STDOUT terraform:   + volume_type = "ssd"
2025-09-08 00:01:58.126315 | orchestrator | 00:01:58.126 STDOUT terraform:   }
2025-09-08 00:01:58.126470 | orchestrator | 00:01:58.126 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-09-08 00:01:58.126509 | orchestrator | 00:01:58.126 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-08 00:01:58.126582 | orchestrator | 00:01:58.126 STDOUT terraform:   + attachment = (known after apply)
2025-09-08 00:01:58.126593 | orchestrator | 00:01:58.126 STDOUT terraform:   + availability_zone = "nova"
2025-09-08 00:01:58.126599 | orchestrator | 00:01:58.126 STDOUT terraform:   + id = (known after apply)
2025-09-08 00:01:58.126643 | orchestrator | 00:01:58.126 STDOUT terraform:   + image_id = (known after apply)
2025-09-08 00:01:58.126703 | orchestrator | 00:01:58.126 STDOUT terraform:   + metadata = (known after apply)
2025-09-08 00:01:58.126711 | orchestrator | 00:01:58.126 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-09-08 00:01:58.126838 | orchestrator | 00:01:58.126 STDOUT terraform:   + region = (known after apply)
2025-09-08 00:01:58.126844 | orchestrator | 00:01:58.126 STDOUT terraform:   + size = 80
2025-09-08 00:01:58.126848 | orchestrator | 00:01:58.126 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-08 00:01:58.126852 | orchestrator | 00:01:58.126 STDOUT terraform:   + volume_type = "ssd"
2025-09-08 00:01:58.126856 | orchestrator | 00:01:58.126 STDOUT terraform:   }
2025-09-08 00:01:58.126967 | orchestrator | 00:01:58.126 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-09-08 00:01:58.127012 | orchestrator | 00:01:58.126 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-08 00:01:58.127070 | orchestrator | 00:01:58.127 STDOUT terraform:   + attachment = (known after apply)
2025-09-08 00:01:58.127085 | orchestrator | 00:01:58.127 STDOUT terraform:   + availability_zone = "nova"
2025-09-08 00:01:58.127175 | orchestrator | 00:01:58.127 STDOUT terraform:   + id = (known after apply)
2025-09-08 00:01:58.127182 | orchestrator | 00:01:58.127 STDOUT terraform:   + image_id = (known after apply)
2025-09-08 00:01:58.127186 | orchestrator | 00:01:58.127 STDOUT terraform:   + metadata = (known after apply)
2025-09-08 00:01:58.127220 | orchestrator | 00:01:58.127 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-09-08 00:01:58.127262 | orchestrator | 00:01:58.127 STDOUT terraform:   + region = (known after apply)
2025-09-08 00:01:58.127267 | orchestrator | 00:01:58.127 STDOUT terraform:   + size = 80
2025-09-08 00:01:58.127296 | orchestrator | 00:01:58.127 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-08 00:01:58.127303 | orchestrator | 00:01:58.127 STDOUT terraform:   + volume_type = "ssd"
2025-09-08 00:01:58.127396 | orchestrator | 00:01:58.127 STDOUT terraform:   }
2025-09-08 00:01:58.127525 | orchestrator | 00:01:58.127 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-09-08 00:01:58.127533 | orchestrator | 00:01:58.127 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-08 00:01:58.127568 | orchestrator | 00:01:58.127 STDOUT terraform:   + attachment = (known after apply)
2025-09-08 00:01:58.127614 | orchestrator | 00:01:58.127 STDOUT terraform:   + availability_zone = "nova"
2025-09-08 00:01:58.127626 | orchestrator | 00:01:58.127 STDOUT terraform:   + id = (known after apply)
2025-09-08 00:01:58.127675 | orchestrator | 00:01:58.127 STDOUT terraform:   + image_id = (known after apply)
2025-09-08 00:01:58.127731 | orchestrator | 00:01:58.127 STDOUT terraform:   + metadata = (known after apply)
2025-09-08 00:01:58.127744 | orchestrator | 00:01:58.127 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-09-08 00:01:58.127779 | orchestrator | 00:01:58.127 STDOUT terraform:   + region = (known after apply)
2025-09-08 00:01:58.127786 | orchestrator | 00:01:58.127 STDOUT terraform:   + size = 80
2025-09-08 00:01:58.127813 | orchestrator | 00:01:58.127 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-08 00:01:58.127829 | orchestrator | 00:01:58.127 STDOUT terraform:   + volume_type = "ssd"
2025-09-08 00:01:58.127866 | orchestrator | 00:01:58.127 STDOUT terraform:   }
2025-09-08 00:01:58.128016 | orchestrator | 00:01:58.127 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-09-08 00:01:58.128028 | orchestrator | 00:01:58.127 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-08 00:01:58.128059 | orchestrator | 00:01:58.128 STDOUT terraform:   + attachment = (known after apply)
2025-09-08 00:01:58.128125 | orchestrator | 00:01:58.128 STDOUT terraform:   + availability_zone = "nova"
2025-09-08 00:01:58.128133 | orchestrator | 00:01:58.128 STDOUT terraform:   + id = (known after apply)
2025-09-08 00:01:58.128182 | orchestrator | 00:01:58.128 STDOUT terraform:   + image_id = (known after apply)
2025-09-08 00:01:58.128235 | orchestrator | 00:01:58.128 STDOUT terraform:   + metadata = (known after apply)
2025-09-08 00:01:58.128264 | orchestrator | 00:01:58.128 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-09-08 00:01:58.128296 | orchestrator | 00:01:58.128 STDOUT terraform:   + region = (known after apply)
2025-09-08 00:01:58.128305 | orchestrator | 00:01:58.128 STDOUT terraform:   + size = 80
2025-09-08 00:01:58.128349 | orchestrator | 00:01:58.128 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-08 00:01:58.128356 | orchestrator | 00:01:58.128 STDOUT terraform:   + volume_type = "ssd"
2025-09-08 00:01:58.128362 | orchestrator | 00:01:58.128 STDOUT terraform:   }
2025-09-08 00:01:58.128515 | orchestrator | 00:01:58.128 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-09-08 00:01:58.128543 | orchestrator | 00:01:58.128 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-08 00:01:58.128627 | orchestrator | 00:01:58.128 STDOUT terraform:   + attachment = (known after apply)
2025-09-08 00:01:58.128633 | orchestrator | 00:01:58.128 STDOUT terraform:   + availability_zone = "nova"
2025-09-08 00:01:58.128638 | orchestrator | 00:01:58.128 STDOUT terraform:   + id = (known after apply)
2025-09-08 00:01:58.128680 | orchestrator | 00:01:58.128 STDOUT terraform:   + image_id = (known after apply)
2025-09-08 00:01:58.128757 | orchestrator | 00:01:58.128 STDOUT terraform:   + metadata = (known after apply)
2025-09-08 00:01:58.128768 | orchestrator | 00:01:58.128 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-09-08 00:01:58.128777 | orchestrator | 00:01:58.128 STDOUT terraform:   + region = (known after apply)
2025-09-08 00:01:58.128805 | orchestrator | 00:01:58.128 STDOUT terraform:   + size = 80
2025-09-08 00:01:58.128822 | orchestrator | 00:01:58.128 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-08 00:01:58.128852 | orchestrator | 00:01:58.128 STDOUT terraform:   + volume_type = "ssd"
2025-09-08 00:01:58.128868 | orchestrator | 00:01:58.128 STDOUT terraform:   }
2025-09-08 00:01:58.128993 | orchestrator | 00:01:58.128 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-09-08 00:01:58.129056 | orchestrator | 00:01:58.128 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-09-08 00:01:58.129067 | orchestrator | 00:01:58.129 STDOUT
terraform:  + attachment = (known after apply) 2025-09-08 00:01:58.129073 | orchestrator | 00:01:58.129 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:58.129139 | orchestrator | 00:01:58.129 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.129225 | orchestrator | 00:01:58.129 STDOUT terraform:  + metadata = (known after apply) 2025-09-08 00:01:58.129231 | orchestrator | 00:01:58.129 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-08 00:01:58.129237 | orchestrator | 00:01:58.129 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.129242 | orchestrator | 00:01:58.129 STDOUT terraform:  + size = 20 2025-09-08 00:01:58.129275 | orchestrator | 00:01:58.129 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-08 00:01:58.129329 | orchestrator | 00:01:58.129 STDOUT terraform:  + volume_type = "ssd" 2025-09-08 00:01:58.129340 | orchestrator | 00:01:58.129 STDOUT terraform:  } 2025-09-08 00:01:58.129417 | orchestrator | 00:01:58.129 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-08 00:01:58.129502 | orchestrator | 00:01:58.129 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-08 00:01:58.129507 | orchestrator | 00:01:58.129 STDOUT terraform:  + attachment = (known after apply) 2025-09-08 00:01:58.129513 | orchestrator | 00:01:58.129 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:58.129561 | orchestrator | 00:01:58.129 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.129700 | orchestrator | 00:01:58.129 STDOUT terraform:  + metadata = (known after apply) 2025-09-08 00:01:58.129706 | orchestrator | 00:01:58.129 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-08 00:01:58.129709 | orchestrator | 00:01:58.129 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.129714 | orchestrator | 00:01:58.129 STDOUT terraform:  + size = 20 2025-09-08 00:01:58.129717 | 
orchestrator | 00:01:58.129 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-08 00:01:58.129723 | orchestrator | 00:01:58.129 STDOUT terraform:  + volume_type = "ssd" 2025-09-08 00:01:58.129727 | orchestrator | 00:01:58.129 STDOUT terraform:  } 2025-09-08 00:01:58.136984 | orchestrator | 00:01:58.136 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-08 00:01:58.137021 | orchestrator | 00:01:58.136 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-08 00:01:58.137029 | orchestrator | 00:01:58.136 STDOUT terraform:  + attachment = (known after apply) 2025-09-08 00:01:58.137046 | orchestrator | 00:01:58.137 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:58.137088 | orchestrator | 00:01:58.137 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.137134 | orchestrator | 00:01:58.137 STDOUT terraform:  + metadata = (known after apply) 2025-09-08 00:01:58.137174 | orchestrator | 00:01:58.137 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-08 00:01:58.137214 | orchestrator | 00:01:58.137 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.137240 | orchestrator | 00:01:58.137 STDOUT terraform:  + size = 20 2025-09-08 00:01:58.137279 | orchestrator | 00:01:58.137 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-08 00:01:58.137287 | orchestrator | 00:01:58.137 STDOUT terraform:  + volume_type = "ssd" 2025-09-08 00:01:58.137310 | orchestrator | 00:01:58.137 STDOUT terraform:  } 2025-09-08 00:01:58.137356 | orchestrator | 00:01:58.137 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-08 00:01:58.137402 | orchestrator | 00:01:58.137 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-08 00:01:58.137440 | orchestrator | 00:01:58.137 STDOUT terraform:  + attachment = (known after apply) 2025-09-08 00:01:58.137464 | orchestrator | 
00:01:58.137 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:58.137500 | orchestrator | 00:01:58.137 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.137537 | orchestrator | 00:01:58.137 STDOUT terraform:  + metadata = (known after apply) 2025-09-08 00:01:58.137577 | orchestrator | 00:01:58.137 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-08 00:01:58.137611 | orchestrator | 00:01:58.137 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.137634 | orchestrator | 00:01:58.137 STDOUT terraform:  + size = 20 2025-09-08 00:01:58.137659 | orchestrator | 00:01:58.137 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-08 00:01:58.137686 | orchestrator | 00:01:58.137 STDOUT terraform:  + volume_type = "ssd" 2025-09-08 00:01:58.137693 | orchestrator | 00:01:58.137 STDOUT terraform:  } 2025-09-08 00:01:58.137742 | orchestrator | 00:01:58.137 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-08 00:01:58.137784 | orchestrator | 00:01:58.137 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-08 00:01:58.137821 | orchestrator | 00:01:58.137 STDOUT terraform:  + attachment = (known after apply) 2025-09-08 00:01:58.137845 | orchestrator | 00:01:58.137 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:58.137881 | orchestrator | 00:01:58.137 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.137916 | orchestrator | 00:01:58.137 STDOUT terraform:  + metadata = (known after apply) 2025-09-08 00:01:58.137954 | orchestrator | 00:01:58.137 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-08 00:01:58.137992 | orchestrator | 00:01:58.137 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.138005 | orchestrator | 00:01:58.137 STDOUT terraform:  + size = 20 2025-09-08 00:01:58.138056 | orchestrator | 00:01:58.138 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-08 
00:01:58.138080 | orchestrator | 00:01:58.138 STDOUT terraform:  + volume_type = "ssd" 2025-09-08 00:01:58.138088 | orchestrator | 00:01:58.138 STDOUT terraform:  } 2025-09-08 00:01:58.138162 | orchestrator | 00:01:58.138 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-08 00:01:58.138205 | orchestrator | 00:01:58.138 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-08 00:01:58.138241 | orchestrator | 00:01:58.138 STDOUT terraform:  + attachment = (known after apply) 2025-09-08 00:01:58.138267 | orchestrator | 00:01:58.138 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:58.138305 | orchestrator | 00:01:58.138 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.138343 | orchestrator | 00:01:58.138 STDOUT terraform:  + metadata = (known after apply) 2025-09-08 00:01:58.138385 | orchestrator | 00:01:58.138 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-08 00:01:58.138422 | orchestrator | 00:01:58.138 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.138448 | orchestrator | 00:01:58.138 STDOUT terraform:  + size = 20 2025-09-08 00:01:58.138470 | orchestrator | 00:01:58.138 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-08 00:01:58.138496 | orchestrator | 00:01:58.138 STDOUT terraform:  + volume_type = "ssd" 2025-09-08 00:01:58.138503 | orchestrator | 00:01:58.138 STDOUT terraform:  } 2025-09-08 00:01:58.138633 | orchestrator | 00:01:58.138 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-08 00:01:58.138640 | orchestrator | 00:01:58.138 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-08 00:01:58.138644 | orchestrator | 00:01:58.138 STDOUT terraform:  + attachment = (known after apply) 2025-09-08 00:01:58.138650 | orchestrator | 00:01:58.138 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:58.138717 | 
orchestrator | 00:01:58.138 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.138725 | orchestrator | 00:01:58.138 STDOUT terraform:  + metadata = (known after apply) 2025-09-08 00:01:58.138789 | orchestrator | 00:01:58.138 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-08 00:01:58.138795 | orchestrator | 00:01:58.138 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.138801 | orchestrator | 00:01:58.138 STDOUT terraform:  + size = 20 2025-09-08 00:01:58.138852 | orchestrator | 00:01:58.138 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-08 00:01:58.138858 | orchestrator | 00:01:58.138 STDOUT terraform:  + volume_type = "ssd" 2025-09-08 00:01:58.138869 | orchestrator | 00:01:58.138 STDOUT terraform:  } 2025-09-08 00:01:58.138919 | orchestrator | 00:01:58.138 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-08 00:01:58.138951 | orchestrator | 00:01:58.138 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-08 00:01:58.139036 | orchestrator | 00:01:58.138 STDOUT terraform:  + attachment = (known after apply) 2025-09-08 00:01:58.139043 | orchestrator | 00:01:58.138 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:58.139047 | orchestrator | 00:01:58.138 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.139135 | orchestrator | 00:01:58.139 STDOUT terraform:  + metadata = (known after apply) 2025-09-08 00:01:58.139141 | orchestrator | 00:01:58.139 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-08 00:01:58.139147 | orchestrator | 00:01:58.139 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.139153 | orchestrator | 00:01:58.139 STDOUT terraform:  + size = 20 2025-09-08 00:01:58.139193 | orchestrator | 00:01:58.139 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-08 00:01:58.139200 | orchestrator | 00:01:58.139 STDOUT terraform:  + volume_type = "ssd" 
2025-09-08 00:01:58.139208 | orchestrator | 00:01:58.139 STDOUT terraform:  } 2025-09-08 00:01:58.139333 | orchestrator | 00:01:58.139 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-08 00:01:58.139340 | orchestrator | 00:01:58.139 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-08 00:01:58.139347 | orchestrator | 00:01:58.139 STDOUT terraform:  + attachment = (known after apply) 2025-09-08 00:01:58.139372 | orchestrator | 00:01:58.139 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:58.139423 | orchestrator | 00:01:58.139 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.139431 | orchestrator | 00:01:58.139 STDOUT terraform:  + metadata = (known after apply) 2025-09-08 00:01:58.139506 | orchestrator | 00:01:58.139 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-08 00:01:58.139512 | orchestrator | 00:01:58.139 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.139518 | orchestrator | 00:01:58.139 STDOUT terraform:  + size = 20 2025-09-08 00:01:58.139587 | orchestrator | 00:01:58.139 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-08 00:01:58.139593 | orchestrator | 00:01:58.139 STDOUT terraform:  + volume_type = "ssd" 2025-09-08 00:01:58.139597 | orchestrator | 00:01:58.139 STDOUT terraform:  } 2025-09-08 00:01:58.139628 | orchestrator | 00:01:58.139 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-08 00:01:58.139701 | orchestrator | 00:01:58.139 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-08 00:01:58.139712 | orchestrator | 00:01:58.139 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-08 00:01:58.139779 | orchestrator | 00:01:58.139 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-08 00:01:58.139794 | orchestrator | 00:01:58.139 STDOUT terraform:  + all_metadata = (known after apply) 
2025-09-08 00:01:58.139801 | orchestrator | 00:01:58.139 STDOUT terraform:  + all_tags = (known after apply) 2025-09-08 00:01:58.139809 | orchestrator | 00:01:58.139 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:58.139841 | orchestrator | 00:01:58.139 STDOUT terraform:  + config_drive = true 2025-09-08 00:01:58.139894 | orchestrator | 00:01:58.139 STDOUT terraform:  + created = (known after apply) 2025-09-08 00:01:58.139903 | orchestrator | 00:01:58.139 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-08 00:01:58.139934 | orchestrator | 00:01:58.139 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-08 00:01:58.140085 | orchestrator | 00:01:58.139 STDOUT terraform:  + force_delete = false 2025-09-08 00:01:58.140091 | orchestrator | 00:01:58.139 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-08 00:01:58.140095 | orchestrator | 00:01:58.139 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.140099 | orchestrator | 00:01:58.140 STDOUT terraform:  + image_id = (known after apply) 2025-09-08 00:01:58.140103 | orchestrator | 00:01:58.140 STDOUT terraform:  + image_name = (known after apply) 2025-09-08 00:01:58.140149 | orchestrator | 00:01:58.140 STDOUT terraform:  + key_pair = "testbed" 2025-09-08 00:01:58.140158 | orchestrator | 00:01:58.140 STDOUT terraform:  + name = "testbed-manager" 2025-09-08 00:01:58.140196 | orchestrator | 00:01:58.140 STDOUT terraform:  + power_state = "active" 2025-09-08 00:01:58.140243 | orchestrator | 00:01:58.140 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.140297 | orchestrator | 00:01:58.140 STDOUT terraform:  + security_groups = (known after apply) 2025-09-08 00:01:58.140408 | orchestrator | 00:01:58.140 STDOUT terraform:  + stop_before_destroy = false 2025-09-08 00:01:58.140506 | orchestrator | 00:01:58.140 STDOUT terraform:  + updated = (known after apply) 2025-09-08 00:01:58.140531 | orchestrator | 00:01:58.140 STDOUT terraform:  + 
user_data = (sensitive value) 2025-09-08 00:01:58.140535 | orchestrator | 00:01:58.140 STDOUT terraform:  + block_device { 2025-09-08 00:01:58.140539 | orchestrator | 00:01:58.140 STDOUT terraform:  + boot_index = 0 2025-09-08 00:01:58.140543 | orchestrator | 00:01:58.140 STDOUT terraform:  + delete_on_termination = false 2025-09-08 00:01:58.140648 | orchestrator | 00:01:58.140 STDOUT terraform:  + destination_type = "volume" 2025-09-08 00:01:58.140682 | orchestrator | 00:01:58.140 STDOUT terraform:  + multiattach = false 2025-09-08 00:01:58.140770 | orchestrator | 00:01:58.140 STDOUT terraform:  + source_type = "volume" 2025-09-08 00:01:58.140775 | orchestrator | 00:01:58.140 STDOUT terraform:  + uuid = (known after apply) 2025-09-08 00:01:58.140861 | orchestrator | 00:01:58.140 STDOUT terraform:  } 2025-09-08 00:01:58.140866 | orchestrator | 00:01:58.140 STDOUT terraform:  + network { 2025-09-08 00:01:58.140874 | orchestrator | 00:01:58.140 STDOUT terraform:  + access_network = false 2025-09-08 00:01:58.140878 | orchestrator | 00:01:58.140 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-08 00:01:58.140881 | orchestrator | 00:01:58.140 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-08 00:01:58.140885 | orchestrator | 00:01:58.140 STDOUT terraform:  + mac = (known after apply) 2025-09-08 00:01:58.140894 | orchestrator | 00:01:58.140 STDOUT terraform:  + name = (known after apply) 2025-09-08 00:01:58.140898 | orchestrator | 00:01:58.140 STDOUT terraform:  + port = (known after apply) 2025-09-08 00:01:58.140902 | orchestrator | 00:01:58.140 STDOUT terraform:  + uuid = (known after apply) 2025-09-08 00:01:58.140905 | orchestrator | 00:01:58.140 STDOUT terraform:  } 2025-09-08 00:01:58.140910 | orchestrator | 00:01:58.140 STDOUT terraform:  } 2025-09-08 00:01:58.140916 | orchestrator | 00:01:58.140 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-08 00:01:58.140920 | orchestrator | 00:01:58.140 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-08 00:01:58.140924 | orchestrator | 00:01:58.140 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-08 00:01:58.140931 | orchestrator | 00:01:58.140 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-08 00:01:58.140935 | orchestrator | 00:01:58.140 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-08 00:01:58.140938 | orchestrator | 00:01:58.140 STDOUT terraform:  + all_tags = (known after apply) 2025-09-08 00:01:58.140942 | orchestrator | 00:01:58.140 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:58.140948 | orchestrator | 00:01:58.140 STDOUT terraform:  + config_drive = true 2025-09-08 00:01:58.141016 | orchestrator | 00:01:58.140 STDOUT terraform:  + created = (known after apply) 2025-09-08 00:01:58.141134 | orchestrator | 00:01:58.140 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-08 00:01:58.141163 | orchestrator | 00:01:58.140 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-08 00:01:58.141189 | orchestrator | 00:01:58.141 STDOUT terraform:  + force_delete = false 2025-09-08 00:01:58.141194 | orchestrator | 00:01:58.141 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-08 00:01:58.141198 | orchestrator | 00:01:58.141 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.141224 | orchestrator | 00:01:58.141 STDOUT terraform:  + image_id = (known after apply) 2025-09-08 00:01:58.141230 | orchestrator | 00:01:58.141 STDOUT terraform:  + image_name = (known after apply) 2025-09-08 00:01:58.141234 | orchestrator | 00:01:58.141 STDOUT terraform:  + key_pair = "testbed" 2025-09-08 00:01:58.141238 | orchestrator | 00:01:58.141 STDOUT terraform:  + name = "testbed-node-0" 2025-09-08 00:01:58.141301 | orchestrator | 00:01:58.141 STDOUT terraform:  + power_state = "active" 2025-09-08 00:01:58.141330 | orchestrator | 00:01:58.141 STDOUT terraform:  + region = (known after 
apply) 2025-09-08 00:01:58.141423 | orchestrator | 00:01:58.141 STDOUT terraform:  + security_groups = (known after apply) 2025-09-08 00:01:58.141495 | orchestrator | 00:01:58.141 STDOUT terraform:  + stop_before_destroy = false 2025-09-08 00:01:58.141499 | orchestrator | 00:01:58.141 STDOUT terraform:  + updated = (known after apply) 2025-09-08 00:01:58.141503 | orchestrator | 00:01:58.141 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-08 00:01:58.141507 | orchestrator | 00:01:58.141 STDOUT terraform:  + block_device { 2025-09-08 00:01:58.141520 | orchestrator | 00:01:58.141 STDOUT terraform:  + boot_index = 0 2025-09-08 00:01:58.141524 | orchestrator | 00:01:58.141 STDOUT terraform:  + delete_on_termination = false 2025-09-08 00:01:58.141528 | orchestrator | 00:01:58.141 STDOUT terraform:  + destination_type = "volume" 2025-09-08 00:01:58.141531 | orchestrator | 00:01:58.141 STDOUT terraform:  + multiattach = false 2025-09-08 00:01:58.141535 | orchestrator | 00:01:58.141 STDOUT terraform:  + source_type = "volume" 2025-09-08 00:01:58.141560 | orchestrator | 00:01:58.141 STDOUT terraform:  + uuid = (known after apply) 2025-09-08 00:01:58.141725 | orchestrator | 00:01:58.141 STDOUT terraform:  } 2025-09-08 00:01:58.141735 | orchestrator | 00:01:58.141 STDOUT terraform:  + network { 2025-09-08 00:01:58.141739 | orchestrator | 00:01:58.141 STDOUT terraform:  + access_network = false 2025-09-08 00:01:58.141742 | orchestrator | 00:01:58.141 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-08 00:01:58.141748 | orchestrator | 00:01:58.141 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-08 00:01:58.141752 | orchestrator | 00:01:58.141 STDOUT terraform:  + mac = (known after apply) 2025-09-08 00:01:58.141756 | orchestrator | 00:01:58.141 STDOUT terraform:  + name = (known after apply) 2025-09-08 00:01:58.141759 | orchestrator | 00:01:58.141 STDOUT terraform:  + port = (known after apply) 2025-09-08 
00:01:58.141763 | orchestrator | 00:01:58.141 STDOUT terraform:  + uuid = (known after apply) 2025-09-08 00:01:58.141769 | orchestrator | 00:01:58.141 STDOUT terraform:  } 2025-09-08 00:01:58.141773 | orchestrator | 00:01:58.141 STDOUT terraform:  } 2025-09-08 00:01:58.141832 | orchestrator | 00:01:58.141 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-09-08 00:01:58.141858 | orchestrator | 00:01:58.141 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-08 00:01:58.141944 | orchestrator | 00:01:58.141 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-08 00:01:58.141948 | orchestrator | 00:01:58.141 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-08 00:01:58.141952 | orchestrator | 00:01:58.141 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-08 00:01:58.142030 | orchestrator | 00:01:58.141 STDOUT terraform:  + all_tags = (known after apply) 2025-09-08 00:01:58.142037 | orchestrator | 00:01:58.141 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:58.142043 | orchestrator | 00:01:58.141 STDOUT terraform:  + config_drive = true 2025-09-08 00:01:58.142137 | orchestrator | 00:01:58.142 STDOUT terraform:  + created = (known after apply) 2025-09-08 00:01:58.142150 | orchestrator | 00:01:58.142 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-08 00:01:58.142154 | orchestrator | 00:01:58.142 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-08 00:01:58.142160 | orchestrator | 00:01:58.142 STDOUT terraform:  + force_delete = false 2025-09-08 00:01:58.142244 | orchestrator | 00:01:58.142 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-08 00:01:58.142253 | orchestrator | 00:01:58.142 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.142259 | orchestrator | 00:01:58.142 STDOUT terraform:  + image_id = (known after apply) 2025-09-08 00:01:58.142304 | orchestrator | 00:01:58.142 STDOUT 
terraform:  + image_name = (known after apply) 2025-09-08 00:01:58.142310 | orchestrator | 00:01:58.142 STDOUT terraform:  + key_pair = "testbed" 2025-09-08 00:01:58.142348 | orchestrator | 00:01:58.142 STDOUT terraform:  + name = "testbed-node-1" 2025-09-08 00:01:58.142355 | orchestrator | 00:01:58.142 STDOUT terraform:  + power_state = "active" 2025-09-08 00:01:58.142398 | orchestrator | 00:01:58.142 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.142474 | orchestrator | 00:01:58.142 STDOUT terraform:  + security_groups = (known after apply) 2025-09-08 00:01:58.142480 | orchestrator | 00:01:58.142 STDOUT terraform:  + stop_before_destroy = false 2025-09-08 00:01:58.142491 | orchestrator | 00:01:58.142 STDOUT terraform:  + updated = (known after apply) 2025-09-08 00:01:58.142547 | orchestrator | 00:01:58.142 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-08 00:01:58.142553 | orchestrator | 00:01:58.142 STDOUT terraform:  + block_device { 2025-09-08 00:01:58.142560 | orchestrator | 00:01:58.142 STDOUT terraform:  + boot_index = 0 2025-09-08 00:01:58.142672 | orchestrator | 00:01:58.142 STDOUT terraform:  + delete_on_termination = false 2025-09-08 00:01:58.142679 | orchestrator | 00:01:58.142 STDOUT terraform:  + destination_type = "volume" 2025-09-08 00:01:58.142682 | orchestrator | 00:01:58.142 STDOUT terraform:  + multiattach = false 2025-09-08 00:01:58.142688 | orchestrator | 00:01:58.142 STDOUT terraform:  + source_type = "volume" 2025-09-08 00:01:58.142790 | orchestrator | 00:01:58.142 STDOUT terraform:  + uuid = (known after apply) 2025-09-08 00:01:58.142802 | orchestrator | 00:01:58.142 STDOUT terraform:  } 2025-09-08 00:01:58.142806 | orchestrator | 00:01:58.142 STDOUT terraform:  + network { 2025-09-08 00:01:58.142809 | orchestrator | 00:01:58.142 STDOUT terraform:  + access_network = false 2025-09-08 00:01:58.142813 | orchestrator | 00:01:58.142 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-09-08 00:01:58.142817 | orchestrator | 00:01:58.142 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-08 00:01:58.142847 | orchestrator | 00:01:58.142 STDOUT terraform:  + mac = (known after apply) 2025-09-08 00:01:58.142875 | orchestrator | 00:01:58.142 STDOUT terraform:  + name = (known after apply) 2025-09-08 00:01:58.142901 | orchestrator | 00:01:58.142 STDOUT terraform:  + port = (known after apply) 2025-09-08 00:01:58.142944 | orchestrator | 00:01:58.142 STDOUT terraform:  + uuid = (known after apply) 2025-09-08 00:01:58.142948 | orchestrator | 00:01:58.142 STDOUT terraform:  } 2025-09-08 00:01:58.142998 | orchestrator | 00:01:58.142 STDOUT terraform:  } 2025-09-08 00:01:58.143067 | orchestrator | 00:01:58.142 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-09-08 00:01:58.143072 | orchestrator | 00:01:58.142 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-08 00:01:58.143080 | orchestrator | 00:01:58.143 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-08 00:01:58.143085 | orchestrator | 00:01:58.143 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-08 00:01:58.143132 | orchestrator | 00:01:58.143 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-08 00:01:58.143180 | orchestrator | 00:01:58.143 STDOUT terraform:  + all_tags = (known after apply) 2025-09-08 00:01:58.143281 | orchestrator | 00:01:58.143 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:58.143288 | orchestrator | 00:01:58.143 STDOUT terraform:  + config_drive = true 2025-09-08 00:01:58.143292 | orchestrator | 00:01:58.143 STDOUT terraform:  + created = (known after apply) 2025-09-08 00:01:58.143296 | orchestrator | 00:01:58.143 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-08 00:01:58.143300 | orchestrator | 00:01:58.143 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-08 00:01:58.143306 | orchestrator | 00:01:58.143 
STDOUT terraform:
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-08 00:01:58.154454 | orchestrator | 00:01:58.154 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-08 00:01:58.154481 | orchestrator | 00:01:58.154 STDOUT terraform:  + all_tags = (known after apply) 2025-09-08 00:01:58.154560 | orchestrator | 00:01:58.154 STDOUT terraform:  + device_id = (known after apply) 2025-09-08 00:01:58.154571 | orchestrator | 00:01:58.154 STDOUT terraform:  + device_owner = (known after apply) 2025-09-08 00:01:58.154592 | orchestrator | 00:01:58.154 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-08 00:01:58.154669 | orchestrator | 00:01:58.154 STDOUT terraform:  + dns_name = (known after apply) 2025-09-08 00:01:58.154677 | orchestrator | 00:01:58.154 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.154735 | orchestrator | 00:01:58.154 STDOUT terraform:  + mac_address = (known after apply) 2025-09-08 00:01:58.154745 | orchestrator | 00:01:58.154 STDOUT terraform:  + network_id = (known after apply) 2025-09-08 00:01:58.154751 | orchestrator | 00:01:58.154 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-08 00:01:58.154830 | orchestrator | 00:01:58.154 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-08 00:01:58.154836 | orchestrator | 00:01:58.154 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.154841 | orchestrator | 00:01:58.154 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-08 00:01:58.154894 | orchestrator | 00:01:58.154 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:58.154900 | orchestrator | 00:01:58.154 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:58.154955 | orchestrator | 00:01:58.154 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-08 00:01:58.154974 | orchestrator | 00:01:58.154 STDOUT terraform:  } 2025-09-08 00:01:58.154978 | orchestrator | 00:01:58.154 STDOUT terraform:  
+ allowed_address_pairs { 2025-09-08 00:01:58.155000 | orchestrator | 00:01:58.154 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-08 00:01:58.155025 | orchestrator | 00:01:58.154 STDOUT terraform:  } 2025-09-08 00:01:58.155029 | orchestrator | 00:01:58.154 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:58.155141 | orchestrator | 00:01:58.154 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-08 00:01:58.155151 | orchestrator | 00:01:58.155 STDOUT terraform:  } 2025-09-08 00:01:58.155155 | orchestrator | 00:01:58.155 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:58.155158 | orchestrator | 00:01:58.155 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-08 00:01:58.155162 | orchestrator | 00:01:58.155 STDOUT terraform:  } 2025-09-08 00:01:58.155166 | orchestrator | 00:01:58.155 STDOUT terraform:  + binding (known after apply) 2025-09-08 00:01:58.155170 | orchestrator | 00:01:58.155 STDOUT terraform:  + fixed_ip { 2025-09-08 00:01:58.155173 | orchestrator | 00:01:58.155 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-09-08 00:01:58.155179 | orchestrator | 00:01:58.155 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-08 00:01:58.155183 | orchestrator | 00:01:58.155 STDOUT terraform:  } 2025-09-08 00:01:58.155187 | orchestrator | 00:01:58.155 STDOUT terraform:  } 2025-09-08 00:01:58.155261 | orchestrator | 00:01:58.155 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-09-08 00:01:58.155373 | orchestrator | 00:01:58.155 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-08 00:01:58.155383 | orchestrator | 00:01:58.155 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-08 00:01:58.155387 | orchestrator | 00:01:58.155 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-08 00:01:58.155392 | orchestrator | 00:01:58.155 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-09-08 00:01:58.155396 | orchestrator | 00:01:58.155 STDOUT terraform:  + all_tags = (known after apply) 2025-09-08 00:01:58.155404 | orchestrator | 00:01:58.155 STDOUT terraform:  + device_id = (known after apply) 2025-09-08 00:01:58.155503 | orchestrator | 00:01:58.155 STDOUT terraform:  + device_owner = (known after apply) 2025-09-08 00:01:58.155544 | orchestrator | 00:01:58.155 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-08 00:01:58.155586 | orchestrator | 00:01:58.155 STDOUT terraform:  + dns_name = (known after apply) 2025-09-08 00:01:58.155599 | orchestrator | 00:01:58.155 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.155604 | orchestrator | 00:01:58.155 STDOUT terraform:  + mac_address = (known after apply) 2025-09-08 00:01:58.155607 | orchestrator | 00:01:58.155 STDOUT terraform:  + network_id = (known after apply) 2025-09-08 00:01:58.155636 | orchestrator | 00:01:58.155 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-08 00:01:58.155752 | orchestrator | 00:01:58.155 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-08 00:01:58.155756 | orchestrator | 00:01:58.155 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.155760 | orchestrator | 00:01:58.155 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-08 00:01:58.155766 | orchestrator | 00:01:58.155 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:58.155771 | orchestrator | 00:01:58.155 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:58.155808 | orchestrator | 00:01:58.155 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-08 00:01:58.155890 | orchestrator | 00:01:58.155 STDOUT terraform:  } 2025-09-08 00:01:58.155903 | orchestrator | 00:01:58.155 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:58.155906 | orchestrator | 00:01:58.155 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-08 00:01:58.155910 | 
orchestrator | 00:01:58.155 STDOUT terraform:  } 2025-09-08 00:01:58.155914 | orchestrator | 00:01:58.155 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:58.155918 | orchestrator | 00:01:58.155 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-08 00:01:58.155921 | orchestrator | 00:01:58.155 STDOUT terraform:  } 2025-09-08 00:01:58.155927 | orchestrator | 00:01:58.155 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:58.155972 | orchestrator | 00:01:58.155 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-08 00:01:58.156086 | orchestrator | 00:01:58.155 STDOUT terraform:  } 2025-09-08 00:01:58.156097 | orchestrator | 00:01:58.155 STDOUT terraform:  + binding (known after apply) 2025-09-08 00:01:58.156101 | orchestrator | 00:01:58.155 STDOUT terraform:  + fixed_ip { 2025-09-08 00:01:58.156105 | orchestrator | 00:01:58.155 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-09-08 00:01:58.156121 | orchestrator | 00:01:58.156 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-08 00:01:58.156125 | orchestrator | 00:01:58.156 STDOUT terraform:  } 2025-09-08 00:01:58.156129 | orchestrator | 00:01:58.156 STDOUT terraform:  } 2025-09-08 00:01:58.156133 | orchestrator | 00:01:58.156 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-09-08 00:01:58.156213 | orchestrator | 00:01:58.156 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-08 00:01:58.156219 | orchestrator | 00:01:58.156 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-08 00:01:58.156225 | orchestrator | 00:01:58.156 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-08 00:01:58.156279 | orchestrator | 00:01:58.156 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-08 00:01:58.156287 | orchestrator | 00:01:58.156 STDOUT terraform:  + all_tags = (known after apply) 2025-09-08 00:01:58.156385 | orchestrator | 
00:01:58.156 STDOUT terraform:  + device_id = (known after apply) 2025-09-08 00:01:58.156395 | orchestrator | 00:01:58.156 STDOUT terraform:  + device_owner = (known after apply) 2025-09-08 00:01:58.156400 | orchestrator | 00:01:58.156 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-08 00:01:58.156457 | orchestrator | 00:01:58.156 STDOUT terraform:  + dns_name = (known after apply) 2025-09-08 00:01:58.156463 | orchestrator | 00:01:58.156 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.156498 | orchestrator | 00:01:58.156 STDOUT terraform:  + mac_address = (known after apply) 2025-09-08 00:01:58.156561 | orchestrator | 00:01:58.156 STDOUT terraform:  + network_id = (known after apply) 2025-09-08 00:01:58.156575 | orchestrator | 00:01:58.156 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-08 00:01:58.156586 | orchestrator | 00:01:58.156 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-08 00:01:58.156644 | orchestrator | 00:01:58.156 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.156657 | orchestrator | 00:01:58.156 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-08 00:01:58.156732 | orchestrator | 00:01:58.156 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:58.156742 | orchestrator | 00:01:58.156 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:58.156746 | orchestrator | 00:01:58.156 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-08 00:01:58.156751 | orchestrator | 00:01:58.156 STDOUT terraform:  } 2025-09-08 00:01:58.156755 | orchestrator | 00:01:58.156 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:58.156813 | orchestrator | 00:01:58.156 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-08 00:01:58.156819 | orchestrator | 00:01:58.156 STDOUT terraform:  } 2025-09-08 00:01:58.156823 | orchestrator | 00:01:58.156 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 
00:01:58.156828 | orchestrator | 00:01:58.156 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-08 00:01:58.156928 | orchestrator | 00:01:58.156 STDOUT terraform:  } 2025-09-08 00:01:58.156934 | orchestrator | 00:01:58.156 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:58.156937 | orchestrator | 00:01:58.156 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-08 00:01:58.156941 | orchestrator | 00:01:58.156 STDOUT terraform:  } 2025-09-08 00:01:58.156945 | orchestrator | 00:01:58.156 STDOUT terraform:  + binding (known after apply) 2025-09-08 00:01:58.156949 | orchestrator | 00:01:58.156 STDOUT terraform:  + fixed_ip { 2025-09-08 00:01:58.156954 | orchestrator | 00:01:58.156 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-09-08 00:01:58.156972 | orchestrator | 00:01:58.156 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-08 00:01:58.156978 | orchestrator | 00:01:58.156 STDOUT terraform:  } 2025-09-08 00:01:58.157016 | orchestrator | 00:01:58.156 STDOUT terraform:  } 2025-09-08 00:01:58.157049 | orchestrator | 00:01:58.156 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-09-08 00:01:58.157133 | orchestrator | 00:01:58.157 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-08 00:01:58.157142 | orchestrator | 00:01:58.157 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-08 00:01:58.157195 | orchestrator | 00:01:58.157 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-08 00:01:58.157269 | orchestrator | 00:01:58.157 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-08 00:01:58.157275 | orchestrator | 00:01:58.157 STDOUT terraform:  + all_tags = (known after apply) 2025-09-08 00:01:58.157279 | orchestrator | 00:01:58.157 STDOUT terraform:  + device_id = (known after apply) 2025-09-08 00:01:58.157284 | orchestrator | 00:01:58.157 STDOUT terraform:  + device_owner = (known after 
apply) 2025-09-08 00:01:58.157345 | orchestrator | 00:01:58.157 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-08 00:01:58.157361 | orchestrator | 00:01:58.157 STDOUT terraform:  + dns_name = (known after apply) 2025-09-08 00:01:58.157404 | orchestrator | 00:01:58.157 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.157436 | orchestrator | 00:01:58.157 STDOUT terraform:  + mac_address = (known after apply) 2025-09-08 00:01:58.157567 | orchestrator | 00:01:58.157 STDOUT terraform:  + network_id = (known after apply) 2025-09-08 00:01:58.157572 | orchestrator | 00:01:58.157 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-08 00:01:58.157576 | orchestrator | 00:01:58.157 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-08 00:01:58.157579 | orchestrator | 00:01:58.157 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.157585 | orchestrator | 00:01:58.157 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-08 00:01:58.157646 | orchestrator | 00:01:58.157 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:58.157664 | orchestrator | 00:01:58.157 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:58.157751 | orchestrator | 00:01:58.157 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-08 00:01:58.157755 | orchestrator | 00:01:58.157 STDOUT terraform:  } 2025-09-08 00:01:58.157759 | orchestrator | 00:01:58.157 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:58.157763 | orchestrator | 00:01:58.157 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-08 00:01:58.157766 | orchestrator | 00:01:58.157 STDOUT terraform:  } 2025-09-08 00:01:58.157770 | orchestrator | 00:01:58.157 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:58.157774 | orchestrator | 00:01:58.157 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-08 00:01:58.157778 | orchestrator | 00:01:58.157 STDOUT terraform:  } 
2025-09-08 00:01:58.157783 | orchestrator | 00:01:58.157 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:58.157789 | orchestrator | 00:01:58.157 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-08 00:01:58.157831 | orchestrator | 00:01:58.157 STDOUT terraform:  } 2025-09-08 00:01:58.157841 | orchestrator | 00:01:58.157 STDOUT terraform:  + binding (known after apply) 2025-09-08 00:01:58.157845 | orchestrator | 00:01:58.157 STDOUT terraform:  + fixed_ip { 2025-09-08 00:01:58.157851 | orchestrator | 00:01:58.157 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-09-08 00:01:58.157899 | orchestrator | 00:01:58.157 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-08 00:01:58.157904 | orchestrator | 00:01:58.157 STDOUT terraform:  } 2025-09-08 00:01:58.157924 | orchestrator | 00:01:58.157 STDOUT terraform:  } 2025-09-08 00:01:58.158045 | orchestrator | 00:01:58.157 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-09-08 00:01:58.158055 | orchestrator | 00:01:58.157 STDOUT terraform:  + resource "openstack_networking_router_interfac 2025-09-08 00:01:58.158097 | orchestrator | 00:01:58.158 STDOUT terraform: e_v2" "router_interface" { 2025-09-08 00:01:58.158137 | orchestrator | 00:01:58.158 STDOUT terraform:  + force_destroy = false 2025-09-08 00:01:58.158141 | orchestrator | 00:01:58.158 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.158244 | orchestrator | 00:01:58.158 STDOUT terraform:  + port_id = (known after apply) 2025-09-08 00:01:58.158318 | orchestrator | 00:01:58.158 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.158338 | orchestrator | 00:01:58.158 STDOUT terraform:  + router_id = (known after apply) 2025-09-08 00:01:58.158397 | orchestrator | 00:01:58.158 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-08 00:01:58.158402 | orchestrator | 00:01:58.158 STDOUT terraform:  } 2025-09-08 00:01:58.158432 | orchestrator | 
00:01:58.158 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-09-08 00:01:58.158602 | orchestrator | 00:01:58.158 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-09-08 00:01:58.158652 | orchestrator | 00:01:58.158 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-08 00:01:58.158661 | orchestrator | 00:01:58.158 STDOUT terraform:  + all_tags = (known after apply) 2025-09-08 00:01:58.158665 | orchestrator | 00:01:58.158 STDOUT terraform:  + availability_zone_hints = [ 2025-09-08 00:01:58.158669 | orchestrator | 00:01:58.158 STDOUT terraform:  + "nova", 2025-09-08 00:01:58.158673 | orchestrator | 00:01:58.158 STDOUT terraform:  ] 2025-09-08 00:01:58.158677 | orchestrator | 00:01:58.158 STDOUT terraform:  + distributed = (known after apply) 2025-09-08 00:01:58.158681 | orchestrator | 00:01:58.158 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-08 00:01:58.158688 | orchestrator | 00:01:58.158 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-08 00:01:58.158697 | orchestrator | 00:01:58.158 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-08 00:01:58.158701 | orchestrator | 00:01:58.158 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.158705 | orchestrator | 00:01:58.158 STDOUT terraform:  + name = "testbed" 2025-09-08 00:01:58.158709 | orchestrator | 00:01:58.158 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.158713 | orchestrator | 00:01:58.158 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:58.158719 | orchestrator | 00:01:58.158 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-08 00:01:58.158723 | orchestrator | 00:01:58.158 STDOUT terraform:  } 2025-09-08 00:01:58.158727 | orchestrator | 00:01:58.158 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-08 00:01:58.158786 
| orchestrator | 00:01:58.158 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-08 00:01:58.158812 | orchestrator | 00:01:58.158 STDOUT terraform:  + description = "ssh" 2025-09-08 00:01:58.158818 | orchestrator | 00:01:58.158 STDOUT terraform:  + direction = "ingress" 2025-09-08 00:01:58.158874 | orchestrator | 00:01:58.158 STDOUT terraform:  + ethertype = "IPv4" 2025-09-08 00:01:58.158934 | orchestrator | 00:01:58.158 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.159008 | orchestrator | 00:01:58.158 STDOUT terraform:  + port_range_max = 22 2025-09-08 00:01:58.159017 | orchestrator | 00:01:58.158 STDOUT terraform:  + port_range_min = 22 2025-09-08 00:01:58.159021 | orchestrator | 00:01:58.158 STDOUT terraform:  + protocol = "tcp" 2025-09-08 00:01:58.159027 | orchestrator | 00:01:58.158 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.159031 | orchestrator | 00:01:58.158 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-08 00:01:58.159035 | orchestrator | 00:01:58.158 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-08 00:01:58.159039 | orchestrator | 00:01:58.158 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-08 00:01:58.159136 | orchestrator | 00:01:58.159 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-08 00:01:58.159153 | orchestrator | 00:01:58.159 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:58.159157 | orchestrator | 00:01:58.159 STDOUT terraform:  } 2025-09-08 00:01:58.159161 | orchestrator | 00:01:58.159 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-09-08 00:01:58.159212 | orchestrator | 00:01:58.159 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-09-08 00:01:58.159241 | orchestrator | 00:01:58.159 STDOUT terraform:  + 
description = "wireguard" 2025-09-08 00:01:58.159305 | orchestrator | 00:01:58.159 STDOUT terraform:  + direction = "ingress" 2025-09-08 00:01:58.159316 | orchestrator | 00:01:58.159 STDOUT terraform:  + ethertype = "IPv4" 2025-09-08 00:01:58.159322 | orchestrator | 00:01:58.159 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.159375 | orchestrator | 00:01:58.159 STDOUT terraform:  + port_range_max = 51820 2025-09-08 00:01:58.159386 | orchestrator | 00:01:58.159 STDOUT terraform:  + port_range_min = 51820 2025-09-08 00:01:58.159392 | orchestrator | 00:01:58.159 STDOUT terraform:  + protocol = "udp" 2025-09-08 00:01:58.159458 | orchestrator | 00:01:58.159 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.159468 | orchestrator | 00:01:58.159 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-08 00:01:58.159503 | orchestrator | 00:01:58.159 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-08 00:01:58.159548 | orchestrator | 00:01:58.159 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-08 00:01:58.159566 | orchestrator | 00:01:58.159 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-08 00:01:58.159593 | orchestrator | 00:01:58.159 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:58.159650 | orchestrator | 00:01:58.159 STDOUT terraform:  } 2025-09-08 00:01:58.159662 | orchestrator | 00:01:58.159 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-09-08 00:01:58.159738 | orchestrator | 00:01:58.159 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-09-08 00:01:58.159758 | orchestrator | 00:01:58.159 STDOUT terraform:  + direction = "ingress" 2025-09-08 00:01:58.159764 | orchestrator | 00:01:58.159 STDOUT terraform:  + ethertype = "IPv4" 2025-09-08 00:01:58.159890 | orchestrator | 00:01:58.159 STDOUT terraform:  + id = (known 
after apply) 2025-09-08 00:01:58.159940 | orchestrator | 00:01:58.159 STDOUT terraform:  + protocol = "tcp" 2025-09-08 00:01:58.159949 | orchestrator | 00:01:58.159 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.159953 | orchestrator | 00:01:58.159 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-08 00:01:58.159959 | orchestrator | 00:01:58.159 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-08 00:01:58.159962 | orchestrator | 00:01:58.159 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-08 00:01:58.159966 | orchestrator | 00:01:58.159 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-08 00:01:58.160084 | orchestrator | 00:01:58.159 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:58.160094 | orchestrator | 00:01:58.159 STDOUT terraform:  } 2025-09-08 00:01:58.160101 | orchestrator | 00:01:58.159 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-08 00:01:58.160105 | orchestrator | 00:01:58.160 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-08 00:01:58.160125 | orchestrator | 00:01:58.160 STDOUT terraform:  + direction = "ingress" 2025-09-08 00:01:58.160181 | orchestrator | 00:01:58.160 STDOUT terraform:  + ethertype = "IPv4" 2025-09-08 00:01:58.160189 | orchestrator | 00:01:58.160 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.160355 | orchestrator | 00:01:58.160 STDOUT terraform:  + protocol = "udp" 2025-09-08 00:01:58.160361 | orchestrator | 00:01:58.160 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.160365 | orchestrator | 00:01:58.160 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-08 00:01:58.160369 | orchestrator | 00:01:58.160 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-08 00:01:58.160373 | orchestrator | 
00:01:58.160 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-08 00:01:58.160379 | orchestrator | 00:01:58.160 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-08 00:01:58.161271 | orchestrator | 00:01:58.160 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:58.161400 | orchestrator | 00:01:58.160 STDOUT terraform:  } 2025-09-08 00:01:58.161433 | orchestrator | 00:01:58.160 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-08 00:01:58.161438 | orchestrator | 00:01:58.160 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-09-08 00:01:58.161442 | orchestrator | 00:01:58.160 STDOUT terraform:  + direction = "ingress" 2025-09-08 00:01:58.161506 | orchestrator | 00:01:58.160 STDOUT terraform:  + ethertype = "IPv4" 2025-09-08 00:01:58.161560 | orchestrator | 00:01:58.160 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.161663 | orchestrator | 00:01:58.160 STDOUT terraform:  + protocol = "icmp" 2025-09-08 00:01:58.161715 | orchestrator | 00:01:58.160 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.161720 | orchestrator | 00:01:58.160 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-08 00:01:58.161797 | orchestrator | 00:01:58.160 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-08 00:01:58.161816 | orchestrator | 00:01:58.160 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-08 00:01:58.161821 | orchestrator | 00:01:58.160 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-08 00:01:58.161825 | orchestrator | 00:01:58.160 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:58.161844 | orchestrator | 00:01:58.160 STDOUT terraform:  } 2025-09-08 00:01:58.161849 | orchestrator | 00:01:58.160 STDOUT terraform:  # 
openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-09-08 00:01:58.161853 | orchestrator | 00:01:58.160 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-09-08 00:01:58.161871 | orchestrator | 00:01:58.160 STDOUT terraform:  + direction = "ingress" 2025-09-08 00:01:58.162034 | orchestrator | 00:01:58.160 STDOUT terraform:  + ethertype = "IPv4" 2025-09-08 00:01:58.162125 | orchestrator | 00:01:58.160 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.162276 | orchestrator | 00:01:58.160 STDOUT terraform:  + protocol = "tcp" 2025-09-08 00:01:58.162325 | orchestrator | 00:01:58.160 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.162353 | orchestrator | 00:01:58.160 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-08 00:01:58.162400 | orchestrator | 00:01:58.161 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-08 00:01:58.162419 | orchestrator | 00:01:58.161 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-08 00:01:58.162698 | orchestrator | 00:01:58.161 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-08 00:01:58.162784 | orchestrator | 00:01:58.161 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:58.162980 | orchestrator | 00:01:58.161 STDOUT terraform:  } 2025-09-08 00:01:58.163034 | orchestrator | 00:01:58.161 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-09-08 00:01:58.163099 | orchestrator | 00:01:58.161 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-09-08 00:01:58.163179 | orchestrator | 00:01:58.161 STDOUT terraform:  + direction = "ingress" 2025-09-08 00:01:58.163212 | orchestrator | 00:01:58.161 STDOUT terraform:  + ethertype = "IPv4" 2025-09-08 00:01:58.163217 | orchestrator | 00:01:58.161 STDOUT terraform:  + id = (known 
after apply) 2025-09-08 00:01:58.163240 | orchestrator | 00:01:58.161 STDOUT terraform:  + protocol = "udp" 2025-09-08 00:01:58.163244 | orchestrator | 00:01:58.161 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.163313 | orchestrator | 00:01:58.161 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-08 00:01:58.163462 | orchestrator | 00:01:58.161 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-08 00:01:58.163583 | orchestrator | 00:01:58.161 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-08 00:01:58.163612 | orchestrator | 00:01:58.161 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-08 00:01:58.163671 | orchestrator | 00:01:58.161 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:58.163766 | orchestrator | 00:01:58.161 STDOUT terraform:  } 2025-09-08 00:01:58.163798 | orchestrator | 00:01:58.161 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-09-08 00:01:58.163826 | orchestrator | 00:01:58.161 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-09-08 00:01:58.163862 | orchestrator | 00:01:58.161 STDOUT terraform:  + direction = "ingress" 2025-09-08 00:01:58.163866 | orchestrator | 00:01:58.161 STDOUT terraform:  + ethertype = "IPv4" 2025-09-08 00:01:58.163870 | orchestrator | 00:01:58.161 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.163873 | orchestrator | 00:01:58.161 STDOUT terraform:  + protocol = "icmp" 2025-09-08 00:01:58.163893 | orchestrator | 00:01:58.161 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.163897 | orchestrator | 00:01:58.161 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-08 00:01:58.163915 | orchestrator | 00:01:58.161 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-08 00:01:58.163919 | orchestrator | 00:01:58.161 STDOUT 
terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-08 00:01:58.163923 | orchestrator | 00:01:58.161 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-08 00:01:58.163955 | orchestrator | 00:01:58.161 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:58.164021 | orchestrator | 00:01:58.161 STDOUT terraform:  } 2025-09-08 00:01:58.164074 | orchestrator | 00:01:58.161 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-09-08 00:01:58.164104 | orchestrator | 00:01:58.161 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-09-08 00:01:58.164257 | orchestrator | 00:01:58.161 STDOUT terraform:  + description = "vrrp" 2025-09-08 00:01:58.164422 | orchestrator | 00:01:58.162 STDOUT terraform:  + direction = "ingress" 2025-09-08 00:01:58.164490 | orchestrator | 00:01:58.162 STDOUT terraform:  + ethertype = "IPv4" 2025-09-08 00:01:58.164508 | orchestrator | 00:01:58.162 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.164657 | orchestrator | 00:01:58.162 STDOUT terraform:  + protocol = "112" 2025-09-08 00:01:58.164734 | orchestrator | 00:01:58.162 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.164739 | orchestrator | 00:01:58.162 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-08 00:01:58.164743 | orchestrator | 00:01:58.162 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-08 00:01:58.164747 | orchestrator | 00:01:58.162 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-08 00:01:58.164761 | orchestrator | 00:01:58.162 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-08 00:01:58.164765 | orchestrator | 00:01:58.162 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:58.164769 | orchestrator | 00:01:58.162 STDOUT terraform:  } 2025-09-08 00:01:58.164773 | orchestrator | 00:01:58.162 STDOUT terraform:  # 
openstack_networking_secgroup_v2.security_group_management will be created 2025-09-08 00:01:58.164777 | orchestrator | 00:01:58.162 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-09-08 00:01:58.164781 | orchestrator | 00:01:58.162 STDOUT terraform:  + all_tags = (known after apply) 2025-09-08 00:01:58.164785 | orchestrator | 00:01:58.162 STDOUT terraform:  + description = "management security group" 2025-09-08 00:01:58.164788 | orchestrator | 00:01:58.162 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.164792 | orchestrator | 00:01:58.162 STDOUT terraform:  + name = "testbed-management" 2025-09-08 00:01:58.164796 | orchestrator | 00:01:58.162 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.164800 | orchestrator | 00:01:58.162 STDOUT terraform:  + stateful = (known after apply) 2025-09-08 00:01:58.164803 | orchestrator | 00:01:58.162 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:58.164807 | orchestrator | 00:01:58.162 STDOUT terraform:  } 2025-09-08 00:01:58.164811 | orchestrator | 00:01:58.162 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-09-08 00:01:58.164817 | orchestrator | 00:01:58.162 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-09-08 00:01:58.164825 | orchestrator | 00:01:58.162 STDOUT terraform:  + all_tags = (known after apply) 2025-09-08 00:01:58.164829 | orchestrator | 00:01:58.162 STDOUT terraform:  + description = "node security group" 2025-09-08 00:01:58.164832 | orchestrator | 00:01:58.162 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.164836 | orchestrator | 00:01:58.162 STDOUT terraform:  + name = "testbed-node" 2025-09-08 00:01:58.164840 | orchestrator | 00:01:58.162 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.164844 | orchestrator | 00:01:58.162 STDOUT terraform:  + stateful = (known after 
apply) 2025-09-08 00:01:58.164847 | orchestrator | 00:01:58.162 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:58.164851 | orchestrator | 00:01:58.162 STDOUT terraform:  } 2025-09-08 00:01:58.164855 | orchestrator | 00:01:58.162 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-09-08 00:01:58.164859 | orchestrator | 00:01:58.162 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-09-08 00:01:58.164863 | orchestrator | 00:01:58.163 STDOUT terraform:  + all_tags = (known after apply) 2025-09-08 00:01:58.164866 | orchestrator | 00:01:58.163 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-09-08 00:01:58.164870 | orchestrator | 00:01:58.163 STDOUT terraform:  + dns_nameservers = [ 2025-09-08 00:01:58.164874 | orchestrator | 00:01:58.163 STDOUT terraform:  + "8.8.8.8", 2025-09-08 00:01:58.164881 | orchestrator | 00:01:58.163 STDOUT terraform:  + "9.9.9.9", 2025-09-08 00:01:58.164885 | orchestrator | 00:01:58.163 STDOUT terraform:  ] 2025-09-08 00:01:58.164888 | orchestrator | 00:01:58.163 STDOUT terraform:  + enable_dhcp = true 2025-09-08 00:01:58.164892 | orchestrator | 00:01:58.163 STDOUT terraform:  + gateway_ip = (known after apply) 2025-09-08 00:01:58.164896 | orchestrator | 00:01:58.163 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.164900 | orchestrator | 00:01:58.163 STDOUT terraform:  + ip_version = 4 2025-09-08 00:01:58.164903 | orchestrator | 00:01:58.163 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-09-08 00:01:58.164907 | orchestrator | 00:01:58.163 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-09-08 00:01:58.164911 | orchestrator | 00:01:58.163 STDOUT terraform:  + name = "subnet-testbed-management" 2025-09-08 00:01:58.164919 | orchestrator | 00:01:58.163 STDOUT terraform:  + network_id = (known after apply) 2025-09-08 00:01:58.164923 | orchestrator | 00:01:58.163 STDOUT terraform:  + no_gateway = 
false 2025-09-08 00:01:58.164927 | orchestrator | 00:01:58.163 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:58.164931 | orchestrator | 00:01:58.163 STDOUT terraform:  + service_types = (known after apply) 2025-09-08 00:01:58.164934 | orchestrator | 00:01:58.163 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:58.164938 | orchestrator | 00:01:58.163 STDOUT terraform:  + allocation_pool { 2025-09-08 00:01:58.164942 | orchestrator | 00:01:58.163 STDOUT terraform:  + end = "192.168.31.250" 2025-09-08 00:01:58.164946 | orchestrator | 00:01:58.163 STDOUT terraform:  + start = "192.168.31.200" 2025-09-08 00:01:58.164949 | orchestrator | 00:01:58.163 STDOUT terraform:  } 2025-09-08 00:01:58.164954 | orchestrator | 00:01:58.163 STDOUT terraform:  } 2025-09-08 00:01:58.164958 | orchestrator | 00:01:58.163 STDOUT terraform:  # terraform_data.image will be created 2025-09-08 00:01:58.164961 | orchestrator | 00:01:58.163 STDOUT terraform:  + resource "terraform_data" "image" { 2025-09-08 00:01:58.164965 | orchestrator | 00:01:58.163 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.164969 | orchestrator | 00:01:58.163 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-08 00:01:58.164973 | orchestrator | 00:01:58.163 STDOUT terraform:  + output = (known after apply) 2025-09-08 00:01:58.164976 | orchestrator | 00:01:58.163 STDOUT terraform:  } 2025-09-08 00:01:58.164982 | orchestrator | 00:01:58.163 STDOUT terraform:  # terraform_data.image_node will be created 2025-09-08 00:01:58.164986 | orchestrator | 00:01:58.163 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-09-08 00:01:58.164990 | orchestrator | 00:01:58.163 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:58.164994 | orchestrator | 00:01:58.163 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-08 00:01:58.164998 | orchestrator | 00:01:58.163 STDOUT terraform:  + output = (known after apply) 2025-09-08 00:01:58.165001 | 
orchestrator | 00:01:58.163 STDOUT terraform:  } 2025-09-08 00:01:58.165005 | orchestrator | 00:01:58.163 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-09-08 00:01:58.165012 | orchestrator | 00:01:58.163 STDOUT terraform: Changes to Outputs: 2025-09-08 00:01:58.165016 | orchestrator | 00:01:58.163 STDOUT terraform:  + manager_address = (sensitive value) 2025-09-08 00:01:58.165020 | orchestrator | 00:01:58.163 STDOUT terraform:  + private_key = (sensitive value) 2025-09-08 00:01:58.253652 | orchestrator | 00:01:58.249 STDOUT terraform: terraform_data.image: Creating... 2025-09-08 00:01:58.253723 | orchestrator | 00:01:58.249 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=fa833cc8-14c6-25d8-e269-a7f81f052c64] 2025-09-08 00:01:58.358876 | orchestrator | 00:01:58.357 STDOUT terraform: terraform_data.image_node: Creating... 2025-09-08 00:01:58.358936 | orchestrator | 00:01:58.358 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=5f7e56f9-6ea4-c06f-a314-f0ed51b13054] 2025-09-08 00:01:58.396759 | orchestrator | 00:01:58.393 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-09-08 00:01:58.398045 | orchestrator | 00:01:58.395 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-09-08 00:01:58.406097 | orchestrator | 00:01:58.405 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-09-08 00:01:58.406133 | orchestrator | 00:01:58.406 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-09-08 00:01:58.411470 | orchestrator | 00:01:58.411 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-09-08 00:01:58.412683 | orchestrator | 00:01:58.412 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-09-08 00:01:58.415197 | orchestrator | 00:01:58.414 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 
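
The plan output above spells out each security-group rule attribute by attribute. As a sketch, the VRRP rule it lists (description "vrrp", IP protocol "112", open IPv4 ingress) corresponds to HCL roughly like the following; the `security_group_id` reference is an assumption, since the parent group's local name is not visible in the log:

```hcl
# Sketch reconstructed from the plan output above, not the testbed's actual source.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description      = "vrrp"
  direction        = "ingress"
  ethertype        = "IPv4"
  protocol         = "112"        # VRRP is IP protocol number 112
  remote_ip_prefix = "0.0.0.0/0"

  # Assumed parent group; the log only shows security_group_id = (known after apply).
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```

Passing a protocol number as a string is how the OpenStack provider expresses non-TCP/UDP/ICMP protocols, which matches the `protocol = "112"` line in the plan.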
2025-09-08 00:01:58.415255 | orchestrator | 00:01:58.415 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-09-08 00:01:58.420739 | orchestrator | 00:01:58.420 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-09-08 00:01:58.424509 | orchestrator | 00:01:58.424 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-09-08 00:01:58.868480 | orchestrator | 00:01:58.868 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-09-08 00:01:58.873675 | orchestrator | 00:01:58.873 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-09-08 00:01:58.882812 | orchestrator | 00:01:58.882 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-09-08 00:01:58.888159 | orchestrator | 00:01:58.887 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-09-08 00:01:58.972037 | orchestrator | 00:01:58.971 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-09-08 00:01:58.979214 | orchestrator | 00:01:58.978 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-09-08 00:01:59.540188 | orchestrator | 00:01:59.532 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 2s [id=341b3dac-69c8-44c2-9419-4471ffc88f44] 2025-09-08 00:01:59.551041 | orchestrator | 00:01:59.550 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-09-08 00:02:02.054851 | orchestrator | 00:02:02.054 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=59c5476b-d42d-4c70-8df0-eefae278ca55] 2025-09-08 00:02:02.061136 | orchestrator | 00:02:02.060 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 
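
The `terraform_data.image` / `data.openstack_images_image_v2.image` pairing above suggests the image name ("Ubuntu 24.04") is threaded through a `terraform_data` value and then resolved to a Glance image ID by a data source, which is why both data reads complete with the same `id=846820b2-…`. A minimal sketch of that pattern, with the selection criteria assumed since they are not visible in the log:

```hcl
# Sketch of the image-lookup pattern implied by the log; not the actual testbed source.
resource "terraform_data" "image" {
  input = "Ubuntu 24.04"
}

data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output
  most_recent = true  # assumption: how duplicates are disambiguated is not shown
}
```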
2025-09-08 00:02:02.081731 | orchestrator | 00:02:02.081 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=63bbd3aa-19f1-48b0-9249-561d852b638c] 2025-09-08 00:02:02.087002 | orchestrator | 00:02:02.086 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-09-08 00:02:02.109601 | orchestrator | 00:02:02.109 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=1f7dc1ee-c7b6-4bcc-8d38-7d9cabc41a41] 2025-09-08 00:02:02.112400 | orchestrator | 00:02:02.112 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=a654280a-a62d-423c-bf4b-ecfb391ad989] 2025-09-08 00:02:02.114636 | orchestrator | 00:02:02.114 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-09-08 00:02:02.115793 | orchestrator | 00:02:02.115 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-09-08 00:02:02.141545 | orchestrator | 00:02:02.141 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=8d0cadb8-6915-4fd2-b4e0-4946f7f23ce1] 2025-09-08 00:02:02.146979 | orchestrator | 00:02:02.146 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-09-08 00:02:02.213940 | orchestrator | 00:02:02.213 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=17ecbc41-9c45-4ac3-8b64-5422c11ec1e9] 2025-09-08 00:02:02.229230 | orchestrator | 00:02:02.229 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-09-08 00:02:02.371958 | orchestrator | 00:02:02.371 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=db00b734-b58e-4932-8acd-6a266572e733] 2025-09-08 00:02:02.390876 | orchestrator | 00:02:02.390 STDOUT terraform: local_file.id_rsa_pub: Creating... 
2025-09-08 00:02:02.392182 | orchestrator | 00:02:02.391 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=4b92dc1e-8c5d-4e7b-ac22-fcae021763ab] 2025-09-08 00:02:02.397371 | orchestrator | 00:02:02.397 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=8e4debf0f95406b0d40ea6377ed83e7b52bb8daa] 2025-09-08 00:02:02.400148 | orchestrator | 00:02:02.399 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-09-08 00:02:02.405831 | orchestrator | 00:02:02.405 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-09-08 00:02:02.414452 | orchestrator | 00:02:02.414 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=e1c76e61d27219f7be2405f809ec1b18c6c3fc7b] 2025-09-08 00:02:02.450384 | orchestrator | 00:02:02.449 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=d4ba40c0-17ae-4bff-a3cd-012c30b3474e] 2025-09-08 00:02:02.912006 | orchestrator | 00:02:02.911 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=352591b1-afb2-4164-a476-424b5209d609] 2025-09-08 00:02:03.514969 | orchestrator | 00:02:03.514 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=121f5eee-4b96-484e-9f9a-c477bf2170b4] 2025-09-08 00:02:03.522515 | orchestrator | 00:02:03.522 STDOUT terraform: openstack_networking_router_v2.router: Creating... 
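
The `subnet-testbed-management` subnet that just finished creating was fully described in the plan output earlier; collecting those planned values into one place, the resource is roughly the following sketch (the `network_id` reference is inferred from the `net_management` network created above):

```hcl
# Sketch assembled from the planned attribute values shown earlier in this log.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP hands out addresses only from the top of the /20.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```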
2025-09-08 00:02:05.505277 | orchestrator | 00:02:05.504 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=533618ca-ba76-4ec8-ae9d-a2b6607e0691] 2025-09-08 00:02:05.538367 | orchestrator | 00:02:05.537 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=bf5ec338-9407-4616-bedc-d4200aedf8a3] 2025-09-08 00:02:05.555641 | orchestrator | 00:02:05.555 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=e92d3e52-3850-4577-a26e-7745eca46ff8] 2025-09-08 00:02:05.561985 | orchestrator | 00:02:05.561 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=2e113876-c434-4dec-99d9-345ed786448b] 2025-09-08 00:02:05.598850 | orchestrator | 00:02:05.598 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=105ff901-f58d-459b-b46a-1fffc4887b06] 2025-09-08 00:02:05.733102 | orchestrator | 00:02:05.732 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=e7ccd3a5-2d49-497e-92e0-3bd3b69de123] 2025-09-08 00:02:08.430699 | orchestrator | 00:02:08.430 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 4s [id=df4d5897-2679-4b09-9750-1f23429d49a9] 2025-09-08 00:02:08.437660 | orchestrator | 00:02:08.437 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-09-08 00:02:08.438926 | orchestrator | 00:02:08.438 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-09-08 00:02:08.438987 | orchestrator | 00:02:08.438 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 
2025-09-08 00:02:08.650167 | orchestrator | 00:02:08.648 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=b8634557-e4cd-46da-8de6-447a4aea78ee] 2025-09-08 00:02:08.660245 | orchestrator | 00:02:08.660 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-09-08 00:02:08.667873 | orchestrator | 00:02:08.665 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-09-08 00:02:08.671540 | orchestrator | 00:02:08.670 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-09-08 00:02:08.671581 | orchestrator | 00:02:08.671 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-09-08 00:02:08.671586 | orchestrator | 00:02:08.671 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-09-08 00:02:08.673624 | orchestrator | 00:02:08.673 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-09-08 00:02:08.765620 | orchestrator | 00:02:08.765 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=54b918a8-c549-4f45-9905-5ec217b8a6ef] 2025-09-08 00:02:08.773042 | orchestrator | 00:02:08.772 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-09-08 00:02:08.777035 | orchestrator | 00:02:08.776 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-09-08 00:02:08.785579 | orchestrator | 00:02:08.785 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 
2025-09-08 00:02:08.830836 | orchestrator | 00:02:08.830 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=5742178a-0772-4d40-a10d-88329d86e7a2] 2025-09-08 00:02:08.843646 | orchestrator | 00:02:08.843 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-09-08 00:02:08.989039 | orchestrator | 00:02:08.988 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=63e15955-ae2f-4a37-9196-c1e1ec22da89] 2025-09-08 00:02:09.004541 | orchestrator | 00:02:09.004 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-09-08 00:02:09.166714 | orchestrator | 00:02:09.166 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=2e45b7ac-f0e8-4f0a-ad44-23c8d97612ef] 2025-09-08 00:02:09.170891 | orchestrator | 00:02:09.170 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=93ffdb13-bc94-4b41-96bf-87bb42147723] 2025-09-08 00:02:09.182612 | orchestrator | 00:02:09.180 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-09-08 00:02:09.183476 | orchestrator | 00:02:09.183 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-09-08 00:02:09.507505 | orchestrator | 00:02:09.507 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=b204c0a3-7069-4bad-abb4-9037080b822a] 2025-09-08 00:02:09.521167 | orchestrator | 00:02:09.520 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 
2025-09-08 00:02:09.878437 | orchestrator | 00:02:09.878 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=fded8601-6d22-4287-b0b4-c0c6b716401c] 2025-09-08 00:02:09.885788 | orchestrator | 00:02:09.885 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-09-08 00:02:09.905655 | orchestrator | 00:02:09.905 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=4d90c341-5202-41c4-8a1e-f1110bdff4da] 2025-09-08 00:02:09.911225 | orchestrator | 00:02:09.911 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-09-08 00:02:10.081094 | orchestrator | 00:02:10.080 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=41e7f526-5170-4059-8f96-80668bf02b09] 2025-09-08 00:02:10.169511 | orchestrator | 00:02:10.169 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=139cc141-8094-4d0a-bc7a-804a6166519f] 2025-09-08 00:02:10.198292 | orchestrator | 00:02:10.197 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=beb070c3-ff58-415f-acc5-688a6595c163] 2025-09-08 00:02:10.236391 | orchestrator | 00:02:10.235 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=87e4445c-affc-4da5-a392-0d55edb1861d] 2025-09-08 00:02:10.603861 | orchestrator | 00:02:10.603 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=c5af814f-0bda-4954-9ce9-3a678d598e7e] 2025-09-08 00:02:10.728653 | orchestrator | 00:02:10.728 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=2e8c84a4-ad65-4e2f-a458-6f490a770a54] 2025-09-08 00:02:10.729511 | orchestrator | 00:02:10.729 
STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=3a6c2b5a-3fb9-48d7-9f14-7db3ae2b07c6] 2025-09-08 00:02:10.994365 | orchestrator | 00:02:10.993 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=15d9a8eb-0648-453e-925b-b73ee12c20c4] 2025-09-08 00:02:10.999836 | orchestrator | 00:02:10.999 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-09-08 00:02:11.232767 | orchestrator | 00:02:11.232 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=93da8d7c-016f-4930-a987-38ba4452b2c5] 2025-09-08 00:02:11.280508 | orchestrator | 00:02:11.280 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=ac9a5a82-2c38-42e2-a66e-767fcbdd0623] 2025-09-08 00:02:11.308809 | orchestrator | 00:02:11.308 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-09-08 00:02:11.319472 | orchestrator | 00:02:11.319 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-09-08 00:02:11.319518 | orchestrator | 00:02:11.319 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-09-08 00:02:11.323706 | orchestrator | 00:02:11.323 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-09-08 00:02:11.329074 | orchestrator | 00:02:11.328 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-09-08 00:02:11.330652 | orchestrator | 00:02:11.330 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 
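
The floating IP for the manager is handled as two resources: `manager_floating_ip` allocates the address, and `manager_floating_ip_association` binds it to the manager's management port (note the association completes with the same ID as the floating IP itself). A sketch of that split, with the pool name assumed since it is not visible in the log:

```hcl
# Sketch of the two-step floating-IP pattern seen in this apply; pool name assumed.
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = var.public_network  # hypothetical variable; the external network is not shown
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}
```

Splitting allocation from association lets the address survive recreation of the port or instance it is attached to.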
2025-09-08 00:02:12.595576 | orchestrator | 00:02:12.595 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=10380c1f-1544-4b52-985c-97d116fe6199] 2025-09-08 00:02:12.601857 | orchestrator | 00:02:12.601 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-09-08 00:02:12.609297 | orchestrator | 00:02:12.609 STDOUT terraform: local_file.inventory: Creating... 2025-09-08 00:02:12.610235 | orchestrator | 00:02:12.610 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-09-08 00:02:12.620250 | orchestrator | 00:02:12.620 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=fb3ce267d2f34bdd12756751b274e57daa2def0e] 2025-09-08 00:02:12.620794 | orchestrator | 00:02:12.620 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=6e42bf1fc3a10ed2f038b108cb1d5fd11c014f98] 2025-09-08 00:02:13.689097 | orchestrator | 00:02:13.688 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=10380c1f-1544-4b52-985c-97d116fe6199] 2025-09-08 00:02:21.311274 | orchestrator | 00:02:21.310 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-09-08 00:02:21.320287 | orchestrator | 00:02:21.320 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-09-08 00:02:21.320399 | orchestrator | 00:02:21.320 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-09-08 00:02:21.329509 | orchestrator | 00:02:21.329 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-09-08 00:02:21.331869 | orchestrator | 00:02:21.331 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... 
[10s elapsed] 2025-09-08 00:02:21.331959 | orchestrator | 00:02:21.331 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-09-08 00:02:31.311890 | orchestrator | 00:02:31.311 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-09-08 00:02:31.321014 | orchestrator | 00:02:31.320 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-09-08 00:02:31.321126 | orchestrator | 00:02:31.320 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-09-08 00:02:31.330243 | orchestrator | 00:02:31.329 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-09-08 00:02:31.332503 | orchestrator | 00:02:31.332 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-09-08 00:02:31.332575 | orchestrator | 00:02:31.332 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-09-08 00:02:31.926642 | orchestrator | 00:02:31.926 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=a4ebd08e-2a5f-49d9-8ef0-66d5fbed9be2] 2025-09-08 00:02:41.312093 | orchestrator | 00:02:41.311 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-09-08 00:02:41.321292 | orchestrator | 00:02:41.320 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-09-08 00:02:41.331336 | orchestrator | 00:02:41.331 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-09-08 00:02:41.333596 | orchestrator | 00:02:41.333 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... 
[30s elapsed] 2025-09-08 00:02:41.333684 | orchestrator | 00:02:41.333 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-09-08 00:02:42.114694 | orchestrator | 00:02:42.114 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=899f5231-9f35-44df-95eb-7ff0f47ed587] 2025-09-08 00:02:42.126586 | orchestrator | 00:02:42.126 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=159e0ffa-2965-4799-9569-8e001cb527ca] 2025-09-08 00:02:42.268483 | orchestrator | 00:02:42.268 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=28a05416-1c2c-4ebb-aae8-3d82afc85e73] 2025-09-08 00:02:42.705527 | orchestrator | 00:02:42.703 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 32s [id=d00638e9-d28a-4ff1-b432-bb350940346c] 2025-09-08 00:02:51.334447 | orchestrator | 00:02:51.333 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2025-09-08 00:02:52.198000 | orchestrator | 00:02:52.197 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=06aae281-31ef-4a83-ab39-e0a5a4c53bfc] 2025-09-08 00:02:52.225287 | orchestrator | 00:02:52.225 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-09-08 00:02:52.231064 | orchestrator | 00:02:52.230 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-09-08 00:02:52.232584 | orchestrator | 00:02:52.232 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=2389174820382880053] 2025-09-08 00:02:52.235452 | orchestrator | 00:02:52.235 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-09-08 00:02:52.240546 | orchestrator | 00:02:52.240 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 
2025-09-08 00:02:52.246665 | orchestrator | 00:02:52.246 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-09-08 00:02:52.255053 | orchestrator | 00:02:52.254 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-09-08 00:02:52.255285 | orchestrator | 00:02:52.255 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-09-08 00:02:52.256309 | orchestrator | 00:02:52.256 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-09-08 00:02:52.258860 | orchestrator | 00:02:52.258 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-09-08 00:02:52.259809 | orchestrator | 00:02:52.259 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-09-08 00:02:52.276008 | orchestrator | 00:02:52.275 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
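
`null_resource.node_semaphore` completes in 0s and carries no state of its own; it appears to exist purely to funnel dependencies, so that all nine volume attachments start only once every `node_server` instance exists. A sketch of that gating pattern, assuming the semaphore role; the actual volume-to-node mapping is not visible in the log:

```hcl
# Sketch of dependency gating via a no-op resource, as suggested by the ordering above.
resource "null_resource" "node_semaphore" {
  depends_on = [openstack_compute_instance_v2.node_server]  # waits on all instances
}

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count = 9  # nine attachments appear in this run

  # Assumption: which volume attaches to which node is not derivable from the log.
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id

  depends_on = [null_resource.node_semaphore]
}
```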
2025-09-08 00:02:55.739427 | orchestrator | 00:02:55.738 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=28a05416-1c2c-4ebb-aae8-3d82afc85e73/d4ba40c0-17ae-4bff-a3cd-012c30b3474e] 2025-09-08 00:02:55.763220 | orchestrator | 00:02:55.762 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=d00638e9-d28a-4ff1-b432-bb350940346c/17ecbc41-9c45-4ac3-8b64-5422c11ec1e9] 2025-09-08 00:02:55.770624 | orchestrator | 00:02:55.770 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=899f5231-9f35-44df-95eb-7ff0f47ed587/1f7dc1ee-c7b6-4bcc-8d38-7d9cabc41a41] 2025-09-08 00:02:55.802519 | orchestrator | 00:02:55.802 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=899f5231-9f35-44df-95eb-7ff0f47ed587/8d0cadb8-6915-4fd2-b4e0-4946f7f23ce1] 2025-09-08 00:02:55.803789 | orchestrator | 00:02:55.803 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=d00638e9-d28a-4ff1-b432-bb350940346c/63bbd3aa-19f1-48b0-9249-561d852b638c] 2025-09-08 00:02:55.822836 | orchestrator | 00:02:55.822 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=28a05416-1c2c-4ebb-aae8-3d82afc85e73/59c5476b-d42d-4c70-8df0-eefae278ca55] 2025-09-08 00:03:01.889585 | orchestrator | 00:03:01.889 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=d00638e9-d28a-4ff1-b432-bb350940346c/a654280a-a62d-423c-bf4b-ecfb391ad989] 2025-09-08 00:03:01.919331 | orchestrator | 00:03:01.918 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=28a05416-1c2c-4ebb-aae8-3d82afc85e73/4b92dc1e-8c5d-4e7b-ac22-fcae021763ab] 2025-09-08 00:03:01.946776 | orchestrator | 
00:03:01.946 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=899f5231-9f35-44df-95eb-7ff0f47ed587/db00b734-b58e-4932-8acd-6a266572e733] 2025-09-08 00:03:02.278587 | orchestrator | 00:03:02.278 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-09-08 00:03:12.279091 | orchestrator | 00:03:12.278 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-09-08 00:03:12.821364 | orchestrator | 00:03:12.820 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=808fe4e4-60bb-4e52-8534-c70ff984925d] 2025-09-08 00:03:12.851509 | orchestrator | 00:03:12.851 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2025-09-08 00:03:12.851621 | orchestrator | 00:03:12.851 STDOUT terraform: Outputs: 2025-09-08 00:03:12.851641 | orchestrator | 00:03:12.851 STDOUT terraform: manager_address = 2025-09-08 00:03:12.851653 | orchestrator | 00:03:12.851 STDOUT terraform: private_key = 2025-09-08 00:03:12.972454 | orchestrator | ok: Runtime: 0:01:23.060229 2025-09-08 00:03:13.006121 | 2025-09-08 00:03:13.006261 | TASK [Create infrastructure (stable)] 2025-09-08 00:03:13.542125 | orchestrator | skipping: Conditional result was False 2025-09-08 00:03:13.555669 | 2025-09-08 00:03:13.555913 | TASK [Fetch manager address] 2025-09-08 00:03:13.973930 | orchestrator | ok 2025-09-08 00:03:13.984351 | 2025-09-08 00:03:13.984482 | TASK [Set manager_host address] 2025-09-08 00:03:14.063652 | orchestrator | ok 2025-09-08 00:03:14.073098 | 2025-09-08 00:03:14.073228 | LOOP [Update ansible collections] 2025-09-08 00:03:24.003350 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-08 00:03:24.003772 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-08 00:03:24.003830 | orchestrator | 
Starting galaxy collection install process 2025-09-08 00:03:24.003865 | orchestrator | Process install dependency map 2025-09-08 00:03:24.003897 | orchestrator | Starting collection install process 2025-09-08 00:03:24.003925 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons' 2025-09-08 00:03:24.003962 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons 2025-09-08 00:03:24.004000 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-09-08 00:03:24.004071 | orchestrator | ok: Item: commons Runtime: 0:00:09.653161 2025-09-08 00:03:30.413414 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-08 00:03:30.413639 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-08 00:03:30.413687 | orchestrator | Starting galaxy collection install process 2025-09-08 00:03:30.413715 | orchestrator | Process install dependency map 2025-09-08 00:03:30.413742 | orchestrator | Starting collection install process 2025-09-08 00:03:30.413766 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services' 2025-09-08 00:03:30.413791 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services 2025-09-08 00:03:30.413814 | orchestrator | osism.services:999.0.0 was installed successfully 2025-09-08 00:03:30.413854 | orchestrator | ok: Item: services Runtime: 0:00:06.157067 2025-09-08 00:03:30.435899 | 2025-09-08 00:03:30.436025 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-08 00:03:40.958453 | orchestrator | ok 2025-09-08 00:03:40.970136 | 2025-09-08 00:03:40.970289 | TASK [Wait a little longer for the manager so that 
everything is ready] 2025-09-08 00:04:41.007438 | orchestrator | ok 2025-09-08 00:04:41.015891 | 2025-09-08 00:04:41.016002 | TASK [Fetch manager ssh hostkey] 2025-09-08 00:04:42.595396 | orchestrator | Output suppressed because no_log was given 2025-09-08 00:04:42.603003 | 2025-09-08 00:04:42.603132 | TASK [Get ssh keypair from terraform environment] 2025-09-08 00:04:43.133894 | orchestrator | ok: Runtime: 0:00:00.008618 2025-09-08 00:04:43.146977 | 2025-09-08 00:04:43.147154 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-08 00:04:43.193302 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-08 00:04:43.204952 | 2025-09-08 00:04:43.205116 | TASK [Run manager part 0] 2025-09-08 00:04:46.104740 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-08 00:04:46.335043 | orchestrator | 2025-09-08 00:04:46.335103 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-08 00:04:46.335112 | orchestrator | 2025-09-08 00:04:46.335127 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-08 00:04:48.901965 | orchestrator | ok: [testbed-manager] 2025-09-08 00:04:48.902159 | orchestrator | 2025-09-08 00:04:48.902215 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-08 00:04:48.902239 | orchestrator | 2025-09-08 00:04:48.902260 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-08 00:04:51.193970 | orchestrator | ok: [testbed-manager] 2025-09-08 00:04:51.194124 | orchestrator | 2025-09-08 00:04:51.194136 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-08 00:04:51.801037 | 
orchestrator | ok: [testbed-manager] 2025-09-08 00:04:51.801089 | orchestrator | 2025-09-08 00:04:51.801099 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-08 00:04:51.845042 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:04:51.845082 | orchestrator | 2025-09-08 00:04:51.845091 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-08 00:04:51.866284 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:04:51.866318 | orchestrator | 2025-09-08 00:04:51.866325 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-08 00:04:51.890471 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:04:51.890504 | orchestrator | 2025-09-08 00:04:51.890510 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-08 00:04:51.915122 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:04:51.915153 | orchestrator | 2025-09-08 00:04:51.915159 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-08 00:04:51.940253 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:04:51.940286 | orchestrator | 2025-09-08 00:04:51.940294 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-08 00:04:51.969181 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:04:51.969230 | orchestrator | 2025-09-08 00:04:51.969241 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-08 00:04:52.010244 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:04:52.010284 | orchestrator | 2025-09-08 00:04:52.010292 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-08 00:04:52.734086 | orchestrator | changed: [testbed-manager] 2025-09-08 00:04:52.734132 | 
orchestrator | 2025-09-08 00:04:52.734139 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-08 00:07:42.630389 | orchestrator | changed: [testbed-manager] 2025-09-08 00:07:42.630513 | orchestrator | 2025-09-08 00:07:42.630534 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-08 00:09:16.843718 | orchestrator | changed: [testbed-manager] 2025-09-08 00:09:16.843817 | orchestrator | 2025-09-08 00:09:16.843833 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-08 00:09:38.756724 | orchestrator | changed: [testbed-manager] 2025-09-08 00:09:38.756808 | orchestrator | 2025-09-08 00:09:38.756824 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-08 00:09:49.634812 | orchestrator | changed: [testbed-manager] 2025-09-08 00:09:49.634909 | orchestrator | 2025-09-08 00:09:49.634927 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-08 00:09:49.681889 | orchestrator | ok: [testbed-manager] 2025-09-08 00:09:49.681955 | orchestrator | 2025-09-08 00:09:49.681969 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-08 00:09:50.459187 | orchestrator | ok: [testbed-manager] 2025-09-08 00:09:50.459271 | orchestrator | 2025-09-08 00:09:50.459290 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-08 00:09:51.193329 | orchestrator | changed: [testbed-manager] 2025-09-08 00:09:51.193414 | orchestrator | 2025-09-08 00:09:51.193434 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-08 00:09:58.068362 | orchestrator | changed: [testbed-manager] 2025-09-08 00:09:58.068459 | orchestrator | 2025-09-08 00:09:58.068495 | orchestrator | TASK [Install ansible-core in 
venv] ******************************************** 2025-09-08 00:10:04.077948 | orchestrator | changed: [testbed-manager] 2025-09-08 00:10:04.077999 | orchestrator | 2025-09-08 00:10:04.078010 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-08 00:10:06.707913 | orchestrator | changed: [testbed-manager] 2025-09-08 00:10:06.707969 | orchestrator | 2025-09-08 00:10:06.707983 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-08 00:10:08.524674 | orchestrator | changed: [testbed-manager] 2025-09-08 00:10:08.524754 | orchestrator | 2025-09-08 00:10:08.524771 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-08 00:10:09.606605 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-08 00:10:09.606678 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-08 00:10:09.606693 | orchestrator | 2025-09-08 00:10:09.606705 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-08 00:10:09.647399 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-08 00:10:09.647431 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-08 00:10:09.647436 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-08 00:10:09.647441 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-09-08 00:10:14.814765 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-08 00:10:14.814859 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-08 00:10:14.814875 | orchestrator | 2025-09-08 00:10:14.814887 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-08 00:10:15.380617 | orchestrator | changed: [testbed-manager] 2025-09-08 00:10:15.380700 | orchestrator | 2025-09-08 00:10:15.380718 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-08 00:10:38.001236 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-08 00:10:38.001320 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-08 00:10:38.001333 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-08 00:10:38.001343 | orchestrator | 2025-09-08 00:10:38.001353 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-08 00:10:40.276465 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-09-08 00:10:40.276572 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-08 00:10:40.276587 | orchestrator | 2025-09-08 00:10:40.276600 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-08 00:10:40.276613 | orchestrator | 2025-09-08 00:10:40.276624 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-08 00:10:41.735017 | orchestrator | ok: [testbed-manager] 2025-09-08 00:10:41.735101 | orchestrator | 2025-09-08 00:10:41.735118 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-08 00:10:41.783629 | orchestrator | ok: [testbed-manager] 2025-09-08 00:10:41.783704 | 
orchestrator | 2025-09-08 00:10:41.783725 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-08 00:10:41.851482 | orchestrator | ok: [testbed-manager] 2025-09-08 00:10:41.851608 | orchestrator | 2025-09-08 00:10:41.851628 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-08 00:10:42.627838 | orchestrator | changed: [testbed-manager] 2025-09-08 00:10:42.627932 | orchestrator | 2025-09-08 00:10:42.627949 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-08 00:10:44.668695 | orchestrator | changed: [testbed-manager] 2025-09-08 00:10:44.668783 | orchestrator | 2025-09-08 00:10:44.668800 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-08 00:10:46.067026 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-08 00:10:46.067117 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-08 00:10:46.067134 | orchestrator | 2025-09-08 00:10:46.067162 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-08 00:10:47.474272 | orchestrator | changed: [testbed-manager] 2025-09-08 00:10:47.474387 | orchestrator | 2025-09-08 00:10:47.474407 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-08 00:10:49.514923 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-08 00:10:49.515007 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-08 00:10:49.515021 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-08 00:10:49.515032 | orchestrator | 2025-09-08 00:10:49.515045 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-08 00:10:49.567564 | orchestrator | skipping: 
[testbed-manager] 2025-09-08 00:10:49.567633 | orchestrator | 2025-09-08 00:10:49.567648 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-08 00:10:50.136273 | orchestrator | changed: [testbed-manager] 2025-09-08 00:10:50.136364 | orchestrator | 2025-09-08 00:10:50.136382 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-08 00:10:50.204242 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:10:50.204307 | orchestrator | 2025-09-08 00:10:50.204321 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-08 00:10:51.064435 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-08 00:10:51.064477 | orchestrator | changed: [testbed-manager] 2025-09-08 00:10:51.064486 | orchestrator | 2025-09-08 00:10:51.064492 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-08 00:10:51.097607 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:10:51.097643 | orchestrator | 2025-09-08 00:10:51.097651 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-08 00:10:51.125721 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:10:51.125853 | orchestrator | 2025-09-08 00:10:51.125865 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-08 00:10:51.157674 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:10:51.157711 | orchestrator | 2025-09-08 00:10:51.157720 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-08 00:10:51.210901 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:10:51.210937 | orchestrator | 2025-09-08 00:10:51.210948 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-08 00:10:51.923719 | orchestrator 
| ok: [testbed-manager] 2025-09-08 00:10:51.923758 | orchestrator | 2025-09-08 00:10:51.923765 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-08 00:10:51.923769 | orchestrator | 2025-09-08 00:10:51.923773 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-08 00:10:53.373448 | orchestrator | ok: [testbed-manager] 2025-09-08 00:10:53.374512 | orchestrator | 2025-09-08 00:10:53.374588 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-08 00:10:54.397787 | orchestrator | changed: [testbed-manager] 2025-09-08 00:10:54.397859 | orchestrator | 2025-09-08 00:10:54.397874 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:10:54.397887 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-08 00:10:54.397899 | orchestrator | 2025-09-08 00:10:54.983539 | orchestrator | ok: Runtime: 0:06:10.968936 2025-09-08 00:10:55.004601 | 2025-09-08 00:10:55.004754 | TASK [Point out that logging in to the manager is now possible] 2025-09-08 00:10:55.052217 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-09-08 00:10:55.061348 | 2025-09-08 00:10:55.061484 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-08 00:10:55.106626 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-09-08 00:10:55.115728 | 2025-09-08 00:10:55.115849 | TASK [Run manager part 1 + 2] 2025-09-08 00:10:55.924327 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-08 00:10:55.979965 | orchestrator | 2025-09-08 00:10:55.980051 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-08 00:10:55.980069 | orchestrator | 2025-09-08 00:10:55.980095 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-08 00:10:58.558995 | orchestrator | ok: [testbed-manager] 2025-09-08 00:10:58.559086 | orchestrator | 2025-09-08 00:10:58.559139 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-08 00:10:58.590450 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:10:58.590528 | orchestrator | 2025-09-08 00:10:58.590566 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-08 00:10:58.630351 | orchestrator | ok: [testbed-manager] 2025-09-08 00:10:58.630425 | orchestrator | 2025-09-08 00:10:58.630442 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-08 00:10:58.670377 | orchestrator | ok: [testbed-manager] 2025-09-08 00:10:58.670447 | orchestrator | 2025-09-08 00:10:58.670467 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-08 00:10:58.738263 | orchestrator | ok: [testbed-manager] 2025-09-08 00:10:58.738333 | orchestrator | 2025-09-08 00:10:58.738349 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-08 00:10:58.805616 | orchestrator | ok: [testbed-manager] 2025-09-08 00:10:58.805663 | orchestrator | 2025-09-08 00:10:58.805671 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-08 00:10:58.847582 | 
orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-08 00:10:58.847631 | orchestrator | 2025-09-08 00:10:58.847639 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-08 00:10:59.554039 | orchestrator | ok: [testbed-manager] 2025-09-08 00:10:59.554093 | orchestrator | 2025-09-08 00:10:59.554102 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-08 00:10:59.600825 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:10:59.600875 | orchestrator | 2025-09-08 00:10:59.600884 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-08 00:11:00.953740 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:00.953877 | orchestrator | 2025-09-08 00:11:00.953896 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-08 00:11:01.534931 | orchestrator | ok: [testbed-manager] 2025-09-08 00:11:01.535025 | orchestrator | 2025-09-08 00:11:01.535042 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-08 00:11:02.683673 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:02.683734 | orchestrator | 2025-09-08 00:11:02.683751 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-08 00:11:19.522737 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:19.522827 | orchestrator | 2025-09-08 00:11:19.522845 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-08 00:11:20.185974 | orchestrator | ok: [testbed-manager] 2025-09-08 00:11:20.186062 | orchestrator | 2025-09-08 00:11:20.186081 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-09-08 00:11:20.236873 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:11:20.236945 | orchestrator | 2025-09-08 00:11:20.236959 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-08 00:11:21.215297 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:21.215354 | orchestrator | 2025-09-08 00:11:21.215368 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-08 00:11:22.181074 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:22.181108 | orchestrator | 2025-09-08 00:11:22.181116 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-08 00:11:22.732436 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:22.732522 | orchestrator | 2025-09-08 00:11:22.732538 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-08 00:11:22.770459 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-08 00:11:22.770590 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-08 00:11:22.770607 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-08 00:11:22.770619 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-09-08 00:11:25.381103 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:25.381199 | orchestrator | 2025-09-08 00:11:25.381217 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-08 00:11:34.668442 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-08 00:11:34.668531 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-08 00:11:34.668570 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-08 00:11:34.668583 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-08 00:11:34.668602 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-08 00:11:34.668613 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-08 00:11:34.668624 | orchestrator | 2025-09-08 00:11:34.668637 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-08 00:11:35.687517 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:35.687591 | orchestrator | 2025-09-08 00:11:35.687600 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-08 00:11:35.732097 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:11:35.732141 | orchestrator | 2025-09-08 00:11:35.732150 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-08 00:11:38.818419 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:38.818531 | orchestrator | 2025-09-08 00:11:38.818586 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-08 00:11:38.859023 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:11:38.859099 | orchestrator | 2025-09-08 00:11:38.859114 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-08 00:13:17.312349 | orchestrator | changed: [testbed-manager] 2025-09-08 
00:13:17.312476 | orchestrator | 2025-09-08 00:13:17.312496 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-08 00:13:18.495409 | orchestrator | ok: [testbed-manager] 2025-09-08 00:13:18.495498 | orchestrator | 2025-09-08 00:13:18.495515 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:13:18.495530 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-08 00:13:18.495541 | orchestrator | 2025-09-08 00:13:18.730978 | orchestrator | ok: Runtime: 0:02:23.180471 2025-09-08 00:13:18.746110 | 2025-09-08 00:13:18.746253 | TASK [Reboot manager] 2025-09-08 00:13:20.282573 | orchestrator | ok: Runtime: 0:00:00.966578 2025-09-08 00:13:20.299051 | 2025-09-08 00:13:20.299189 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-08 00:13:36.736717 | orchestrator | ok 2025-09-08 00:13:36.748154 | 2025-09-08 00:13:36.748405 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-08 00:14:36.789783 | orchestrator | ok 2025-09-08 00:14:36.800330 | 2025-09-08 00:14:36.800475 | TASK [Deploy manager + bootstrap nodes] 2025-09-08 00:14:39.557036 | orchestrator | 2025-09-08 00:14:39.557334 | orchestrator | # DEPLOY MANAGER 2025-09-08 00:14:39.557365 | orchestrator | 2025-09-08 00:14:39.557380 | orchestrator | + set -e 2025-09-08 00:14:39.557393 | orchestrator | + echo 2025-09-08 00:14:39.557408 | orchestrator | + echo '# DEPLOY MANAGER' 2025-09-08 00:14:39.557426 | orchestrator | + echo 2025-09-08 00:14:39.557478 | orchestrator | + cat /opt/manager-vars.sh 2025-09-08 00:14:39.560561 | orchestrator | export NUMBER_OF_NODES=6 2025-09-08 00:14:39.560586 | orchestrator | 2025-09-08 00:14:39.560598 | orchestrator | export CEPH_VERSION=reef 2025-09-08 00:14:39.560612 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-08 00:14:39.560655 | orchestrator 
| export MANAGER_VERSION=latest 2025-09-08 00:14:39.560677 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-08 00:14:39.560688 | orchestrator | 2025-09-08 00:14:39.560707 | orchestrator | export ARA=false 2025-09-08 00:14:39.560718 | orchestrator | export DEPLOY_MODE=manager 2025-09-08 00:14:39.560736 | orchestrator | export TEMPEST=true 2025-09-08 00:14:39.560748 | orchestrator | export IS_ZUUL=true 2025-09-08 00:14:39.560759 | orchestrator | 2025-09-08 00:14:39.560777 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.100 2025-09-08 00:14:39.560788 | orchestrator | export EXTERNAL_API=false 2025-09-08 00:14:39.560799 | orchestrator | 2025-09-08 00:14:39.560810 | orchestrator | export IMAGE_USER=ubuntu 2025-09-08 00:14:39.560823 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-08 00:14:39.560834 | orchestrator | 2025-09-08 00:14:39.560845 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-08 00:14:39.560861 | orchestrator | 2025-09-08 00:14:39.560872 | orchestrator | + echo 2025-09-08 00:14:39.560885 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-08 00:14:39.561746 | orchestrator | ++ export INTERACTIVE=false 2025-09-08 00:14:39.561765 | orchestrator | ++ INTERACTIVE=false 2025-09-08 00:14:39.561778 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-08 00:14:39.561790 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-08 00:14:39.561865 | orchestrator | + source /opt/manager-vars.sh 2025-09-08 00:14:39.561880 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-08 00:14:39.561891 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-08 00:14:39.561902 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-08 00:14:39.561913 | orchestrator | ++ CEPH_VERSION=reef 2025-09-08 00:14:39.561928 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-08 00:14:39.561940 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-08 00:14:39.561951 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-08 00:14:39.561962 | 
orchestrator | ++ MANAGER_VERSION=latest
2025-09-08 00:14:39.561973 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-08 00:14:39.561991 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-08 00:14:39.562002 | orchestrator | ++ export ARA=false
2025-09-08 00:14:39.562053 | orchestrator | ++ ARA=false
2025-09-08 00:14:39.562068 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-08 00:14:39.562078 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-08 00:14:39.562089 | orchestrator | ++ export TEMPEST=true
2025-09-08 00:14:39.562100 | orchestrator | ++ TEMPEST=true
2025-09-08 00:14:39.562116 | orchestrator | ++ export IS_ZUUL=true
2025-09-08 00:14:39.562127 | orchestrator | ++ IS_ZUUL=true
2025-09-08 00:14:39.562137 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.100
2025-09-08 00:14:39.562148 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.100
2025-09-08 00:14:39.562159 | orchestrator | ++ export EXTERNAL_API=false
2025-09-08 00:14:39.562170 | orchestrator | ++ EXTERNAL_API=false
2025-09-08 00:14:39.562180 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-08 00:14:39.562191 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-08 00:14:39.562202 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-08 00:14:39.562213 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-08 00:14:39.562224 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-08 00:14:39.562235 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-08 00:14:39.562250 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-09-08 00:14:39.650252 | orchestrator | + docker version
2025-09-08 00:14:39.961975 | orchestrator | Client: Docker Engine - Community
2025-09-08 00:14:39.962079 | orchestrator | Version: 27.5.1
2025-09-08 00:14:39.962094 | orchestrator | API version: 1.47
2025-09-08 00:14:39.962107 | orchestrator | Go version: go1.22.11
2025-09-08 00:14:39.962117 | orchestrator | Git commit: 9f9e405
2025-09-08 00:14:39.962128 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-09-08 00:14:39.962140 | orchestrator | OS/Arch: linux/amd64
2025-09-08 00:14:39.962151 | orchestrator | Context: default
2025-09-08 00:14:39.962162 | orchestrator |
2025-09-08 00:14:39.962173 | orchestrator | Server: Docker Engine - Community
2025-09-08 00:14:39.962184 | orchestrator | Engine:
2025-09-08 00:14:39.962195 | orchestrator | Version: 27.5.1
2025-09-08 00:14:39.962206 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-09-08 00:14:39.962246 | orchestrator | Go version: go1.22.11
2025-09-08 00:14:39.962258 | orchestrator | Git commit: 4c9b3b0
2025-09-08 00:14:39.962269 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-09-08 00:14:39.962279 | orchestrator | OS/Arch: linux/amd64
2025-09-08 00:14:39.962290 | orchestrator | Experimental: false
2025-09-08 00:14:39.962301 | orchestrator | containerd:
2025-09-08 00:14:39.962312 | orchestrator | Version: 1.7.27
2025-09-08 00:14:39.962323 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-09-08 00:14:39.962335 | orchestrator | runc:
2025-09-08 00:14:39.962345 | orchestrator | Version: 1.2.5
2025-09-08 00:14:39.962356 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-09-08 00:14:39.962367 | orchestrator | docker-init:
2025-09-08 00:14:39.962378 | orchestrator | Version: 0.19.0
2025-09-08 00:14:39.962390 | orchestrator | GitCommit: de40ad0
2025-09-08 00:14:39.966150 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-09-08 00:14:39.977212 | orchestrator | + set -e
2025-09-08 00:14:39.977230 | orchestrator | + source /opt/manager-vars.sh
2025-09-08 00:14:39.977243 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-08 00:14:39.977255 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-08 00:14:39.977266 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-08 00:14:39.977277 | orchestrator | ++ CEPH_VERSION=reef
2025-09-08 00:14:39.977288 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-08 00:14:39.977299 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-08 00:14:39.977520 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-08 00:14:39.977648 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-08 00:14:39.977667 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-08 00:14:39.977680 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-08 00:14:39.977691 | orchestrator | ++ export ARA=false
2025-09-08 00:14:39.977702 | orchestrator | ++ ARA=false
2025-09-08 00:14:39.977713 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-08 00:14:39.977727 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-08 00:14:39.977738 | orchestrator | ++ export TEMPEST=true
2025-09-08 00:14:39.977748 | orchestrator | ++ TEMPEST=true
2025-09-08 00:14:39.977759 | orchestrator | ++ export IS_ZUUL=true
2025-09-08 00:14:39.977770 | orchestrator | ++ IS_ZUUL=true
2025-09-08 00:14:39.977781 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.100
2025-09-08 00:14:39.977793 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.100
2025-09-08 00:14:39.977804 | orchestrator | ++ export EXTERNAL_API=false
2025-09-08 00:14:39.977814 | orchestrator | ++ EXTERNAL_API=false
2025-09-08 00:14:39.977825 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-08 00:14:39.977836 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-08 00:14:39.977847 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-08 00:14:39.977857 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-08 00:14:39.977869 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-08 00:14:39.977880 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-08 00:14:39.977891 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-08 00:14:39.977902 | orchestrator | ++ export INTERACTIVE=false
2025-09-08 00:14:39.977912 | orchestrator | ++ INTERACTIVE=false
2025-09-08 00:14:39.977923 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-08 00:14:39.977939 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-08 00:14:39.977965 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-08 00:14:39.977977 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-08 00:14:39.977988 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-09-08 00:14:39.984239 | orchestrator | + set -e
2025-09-08 00:14:39.984278 | orchestrator | + VERSION=reef
2025-09-08 00:14:39.985419 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-09-08 00:14:39.992709 | orchestrator | + [[ -n ceph_version: reef ]]
2025-09-08 00:14:39.992765 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-09-08 00:14:40.001403 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-09-08 00:14:40.008597 | orchestrator | + set -e
2025-09-08 00:14:40.008671 | orchestrator | + VERSION=2024.2
2025-09-08 00:14:40.009795 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-09-08 00:14:40.013921 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-09-08 00:14:40.013977 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-09-08 00:14:40.020060 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-09-08 00:14:40.021612 | orchestrator | ++ semver latest 7.0.0
2025-09-08 00:14:40.092654 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-08 00:14:40.092757 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-08 00:14:40.092772 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-09-08 00:14:40.092785 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-09-08 00:14:40.189096 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-08 00:14:40.190554 | orchestrator | + source /opt/venv/bin/activate
2025-09-08 00:14:40.191299 | orchestrator | ++ deactivate nondestructive
2025-09-08 00:14:40.191318 | orchestrator | ++ '[' -n '' ']'
2025-09-08 00:14:40.191331 | orchestrator | ++ '[' -n '' ']'
2025-09-08 00:14:40.191345 | orchestrator | ++ hash -r
2025-09-08 00:14:40.191362 | orchestrator | ++ '[' -n '' ']'
2025-09-08 00:14:40.191375 | orchestrator | ++ unset VIRTUAL_ENV
2025-09-08 00:14:40.191389 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-09-08 00:14:40.191403 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-09-08 00:14:40.191417 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-09-08 00:14:40.191433 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-09-08 00:14:40.191446 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-09-08 00:14:40.191464 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-09-08 00:14:40.191477 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-08 00:14:40.191491 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-08 00:14:40.191827 | orchestrator | ++ export PATH
2025-09-08 00:14:40.191844 | orchestrator | ++ '[' -n '' ']'
2025-09-08 00:14:40.191855 | orchestrator | ++ '[' -z '' ']'
2025-09-08 00:14:40.191867 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-09-08 00:14:40.191887 | orchestrator | ++ PS1='(venv) '
2025-09-08 00:14:40.191907 | orchestrator | ++ export PS1
2025-09-08 00:14:40.191926 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-09-08 00:14:40.191945 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-09-08 00:14:40.191963 | orchestrator | ++ hash -r
2025-09-08 00:14:40.192009 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-09-08 00:14:41.532924 | orchestrator |
2025-09-08 00:14:41.533038 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-09-08 00:14:41.533056 | orchestrator |
2025-09-08 00:14:41.533068 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-08 00:14:42.116538 | orchestrator | ok: [testbed-manager]
2025-09-08 00:14:42.116696 | orchestrator |
2025-09-08 00:14:42.116715 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-08 00:14:43.118895 | orchestrator | changed: [testbed-manager]
2025-09-08 00:14:43.119019 | orchestrator |
2025-09-08 00:14:43.119037 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-09-08 00:14:43.119051 | orchestrator |
2025-09-08 00:14:43.119063 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-08 00:14:46.421110 | orchestrator | ok: [testbed-manager]
2025-09-08 00:14:46.421233 | orchestrator |
2025-09-08 00:14:46.421252 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-09-08 00:14:46.487191 | orchestrator | ok: [testbed-manager]
2025-09-08 00:14:46.487261 | orchestrator |
2025-09-08 00:14:46.487280 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-09-08 00:14:46.985214 | orchestrator | changed: [testbed-manager]
2025-09-08 00:14:46.985324 | orchestrator |
2025-09-08 00:14:46.985340 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-09-08 00:14:47.018836 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:14:47.018893 | orchestrator |
2025-09-08 00:14:47.018907 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-09-08 00:14:47.367525 | orchestrator | changed: [testbed-manager]
2025-09-08 00:14:47.367666 | orchestrator |
2025-09-08 00:14:47.367683 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-09-08 00:14:47.425797 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:14:47.425894 | orchestrator |
2025-09-08 00:14:47.425911 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-09-08 00:14:47.769380 | orchestrator | ok: [testbed-manager]
2025-09-08 00:14:47.769469 | orchestrator |
2025-09-08 00:14:47.769481 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-09-08 00:14:47.891766 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:14:47.891853 | orchestrator |
2025-09-08 00:14:47.891868 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-09-08 00:14:47.891880 | orchestrator |
2025-09-08 00:14:47.891894 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-08 00:14:50.618978 | orchestrator | ok: [testbed-manager]
2025-09-08 00:14:50.619087 | orchestrator |
2025-09-08 00:14:50.619103 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-09-08 00:14:50.714444 | orchestrator | included: osism.services.traefik for testbed-manager
2025-09-08 00:14:50.714513 | orchestrator |
2025-09-08 00:14:50.714527 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-09-08 00:14:50.775960 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-09-08 00:14:50.776019 | orchestrator |
2025-09-08 00:14:50.776035 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-09-08 00:14:51.899780 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-09-08 00:14:51.899888 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
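
The `set-ceph-version.sh` and `set-openstack-version.sh` traces earlier in this log both follow the same pin-the-version pattern: grep for the key in the manager configuration, and only rewrite it via `sed -i` if it is already present. A minimal bash sketch of that pattern; `set_version` and the temporary config path are illustrative names, the real scripts edit `/opt/configuration/environments/manager/configuration.yml` directly:

```shell
#!/usr/bin/env bash
# Sketch of the version-pinning pattern seen in the traced scripts.
# Only rewrites the key if it already exists in the config file, so a
# missing key is silently left alone (matching the [[ -n $(grep ...) ]]
# guard visible in the trace).
set_version() {
    local key="$1" version="$2" config="$3"
    if [[ -n "$(grep "^${key}:" "$config")" ]]; then
        sed -i "s/${key}: .*/${key}: ${version}/g" "$config"
    fi
}
```

Called as, e.g., `set_version ceph_version reef "$config"`, this reproduces the `ceph_version: reef` rewrite visible in the trace above.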
2025-09-08 00:14:51.899903 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-09-08 00:14:51.899915 | orchestrator |
2025-09-08 00:14:51.899928 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-09-08 00:14:53.751345 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-09-08 00:14:53.751455 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-09-08 00:14:53.751474 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-09-08 00:14:53.751487 | orchestrator |
2025-09-08 00:14:53.751500 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-09-08 00:14:54.379920 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-08 00:14:54.380033 | orchestrator | changed: [testbed-manager]
2025-09-08 00:14:54.380050 | orchestrator |
2025-09-08 00:14:54.380063 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-09-08 00:14:55.041470 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-08 00:14:55.041560 | orchestrator | changed: [testbed-manager]
2025-09-08 00:14:55.041575 | orchestrator |
2025-09-08 00:14:55.041586 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-09-08 00:14:55.092406 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:14:55.092433 | orchestrator |
2025-09-08 00:14:55.092446 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-09-08 00:14:55.453592 | orchestrator | ok: [testbed-manager]
2025-09-08 00:14:55.453713 | orchestrator |
2025-09-08 00:14:55.453727 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-09-08 00:14:55.539553 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-09-08 00:14:55.539612 | orchestrator |
2025-09-08 00:14:55.539653 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-09-08 00:14:56.606398 | orchestrator | changed: [testbed-manager]
2025-09-08 00:14:56.606495 | orchestrator |
2025-09-08 00:14:56.606510 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-09-08 00:14:57.473072 | orchestrator | changed: [testbed-manager]
2025-09-08 00:14:57.473175 | orchestrator |
2025-09-08 00:14:57.473190 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-09-08 00:15:09.601602 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:09.601715 | orchestrator |
2025-09-08 00:15:09.601727 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-09-08 00:15:09.646370 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:15:09.646434 | orchestrator |
2025-09-08 00:15:09.646456 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-09-08 00:15:09.646477 | orchestrator |
2025-09-08 00:15:09.646497 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-08 00:15:11.455845 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:11.455933 | orchestrator |
2025-09-08 00:15:11.455986 | orchestrator | TASK [Apply manager role] ******************************************************
2025-09-08 00:15:11.584950 | orchestrator | included: osism.services.manager for testbed-manager
2025-09-08 00:15:11.585017 | orchestrator |
2025-09-08 00:15:11.585028 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-09-08 00:15:11.663007 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-09-08 00:15:11.663051 | orchestrator |
2025-09-08 00:15:11.663064 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-09-08 00:15:14.349791 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:14.349901 | orchestrator |
2025-09-08 00:15:14.349916 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-09-08 00:15:14.403380 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:14.403414 | orchestrator |
2025-09-08 00:15:14.403428 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-09-08 00:15:14.552561 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-09-08 00:15:14.552598 | orchestrator |
2025-09-08 00:15:14.552609 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-09-08 00:15:17.475594 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-09-08 00:15:17.475751 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-09-08 00:15:17.475767 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-09-08 00:15:17.475779 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-09-08 00:15:17.475791 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-09-08 00:15:17.475802 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-09-08 00:15:17.475812 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-09-08 00:15:17.475823 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-09-08 00:15:17.475834 | orchestrator |
2025-09-08 00:15:17.475846 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-09-08 00:15:18.133975 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:18.134125 | orchestrator |
2025-09-08 00:15:18.134141 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-09-08 00:15:18.789714 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:18.789819 | orchestrator |
2025-09-08 00:15:18.789834 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-09-08 00:15:18.864049 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-09-08 00:15:18.864150 | orchestrator |
2025-09-08 00:15:18.864167 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-09-08 00:15:20.040952 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-09-08 00:15:20.041061 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-09-08 00:15:20.041075 | orchestrator |
2025-09-08 00:15:20.041087 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-09-08 00:15:20.697712 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:20.697824 | orchestrator |
2025-09-08 00:15:20.697841 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-09-08 00:15:20.755567 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:15:20.755687 | orchestrator |
2025-09-08 00:15:20.755704 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2025-09-08 00:15:20.854937 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2025-09-08 00:15:20.855014 | orchestrator |
2025-09-08 00:15:20.855029 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2025-09-08 00:15:21.518484 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:21.518585 | orchestrator |
2025-09-08 00:15:21.518599 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-09-08 00:15:21.604478 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-09-08 00:15:21.604559 | orchestrator |
2025-09-08 00:15:21.604572 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-09-08 00:15:22.990868 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-08 00:15:22.990968 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-08 00:15:22.990981 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:22.990992 | orchestrator |
2025-09-08 00:15:22.991002 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-09-08 00:15:23.642249 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:23.642345 | orchestrator |
2025-09-08 00:15:23.642358 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-09-08 00:15:23.698070 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:15:23.698113 | orchestrator |
2025-09-08 00:15:23.698124 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-09-08 00:15:23.809589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-09-08 00:15:23.809705 | orchestrator |
2025-09-08 00:15:23.809717 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-09-08 00:15:24.354365 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:24.354466 | orchestrator |
2025-09-08 00:15:24.354490 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-09-08 00:15:24.778992 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:24.779100 | orchestrator |
2025-09-08 00:15:24.779117 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-09-08 00:15:26.046266 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-09-08 00:15:26.046379 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-09-08 00:15:26.046395 | orchestrator |
2025-09-08 00:15:26.046408 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-09-08 00:15:26.737782 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:26.737883 | orchestrator |
2025-09-08 00:15:26.737899 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-09-08 00:15:27.139298 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:27.139395 | orchestrator |
2025-09-08 00:15:27.139409 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-09-08 00:15:27.518958 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:27.519056 | orchestrator |
2025-09-08 00:15:27.519070 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-09-08 00:15:27.568032 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:15:27.568085 | orchestrator |
2025-09-08 00:15:27.568101 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-09-08 00:15:27.654511 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-09-08 00:15:27.654544 | orchestrator |
2025-09-08 00:15:27.654556 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-09-08 00:15:27.692585 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:27.692612 | orchestrator |
2025-09-08 00:15:27.692624 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-09-08 00:15:29.693122 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-09-08 00:15:29.693214 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-09-08 00:15:29.693229 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-09-08 00:15:29.693241 | orchestrator |
2025-09-08 00:15:29.693253 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-09-08 00:15:30.402720 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:30.402826 | orchestrator |
2025-09-08 00:15:30.402844 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-09-08 00:15:31.126723 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:31.126852 | orchestrator |
2025-09-08 00:15:31.126869 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-09-08 00:15:31.865149 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:31.865257 | orchestrator |
2025-09-08 00:15:31.865272 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-09-08 00:15:31.944013 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-09-08 00:15:31.944056 | orchestrator |
2025-09-08 00:15:31.944069 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-09-08 00:15:31.985555 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:31.985586 | orchestrator |
2025-09-08 00:15:31.985597 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-09-08 00:15:32.723916 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-09-08 00:15:32.724020 | orchestrator |
2025-09-08 00:15:32.724035 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-09-08 00:15:32.823518 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-09-08 00:15:32.823604 | orchestrator |
2025-09-08 00:15:32.823618 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-09-08 00:15:33.536162 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:33.536236 | orchestrator |
2025-09-08 00:15:33.536243 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-09-08 00:15:34.126355 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:34.126463 | orchestrator |
2025-09-08 00:15:34.126479 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-09-08 00:15:34.184907 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:15:34.184990 | orchestrator |
2025-09-08 00:15:34.185012 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-09-08 00:15:34.242829 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:34.242908 | orchestrator |
2025-09-08 00:15:34.242924 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-09-08 00:15:35.096625 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:35.096783 | orchestrator |
2025-09-08 00:15:35.096799 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-09-08 00:17:05.890098 | orchestrator | changed: [testbed-manager]
2025-09-08 00:17:05.890232 | orchestrator |
2025-09-08 00:17:05.890248 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-09-08 00:17:06.931886 | orchestrator | ok: [testbed-manager]
2025-09-08 00:17:06.932012 | orchestrator |
2025-09-08 00:17:06.932029 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-09-08 00:17:06.991638 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:17:06.991739 | orchestrator |
2025-09-08 00:17:06.991755 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-09-08 00:17:09.503390 | orchestrator | changed: [testbed-manager]
2025-09-08 00:17:09.503502 | orchestrator |
2025-09-08 00:17:09.503518 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-09-08 00:17:09.568015 | orchestrator | ok: [testbed-manager]
2025-09-08 00:17:09.568092 | orchestrator |
2025-09-08 00:17:09.568106 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-08 00:17:09.568118 | orchestrator |
2025-09-08 00:17:09.568130 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-09-08 00:17:09.621243 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:17:09.621291 | orchestrator |
2025-09-08 00:17:09.621304 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-09-08 00:18:09.676508 | orchestrator | Pausing for 60 seconds
2025-09-08 00:18:09.676708 | orchestrator | changed: [testbed-manager]
2025-09-08 00:18:09.676727 | orchestrator |
2025-09-08 00:18:09.676741 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-09-08 00:18:13.799938 | orchestrator | changed: [testbed-manager]
2025-09-08 00:18:13.800067 | orchestrator |
2025-09-08 00:18:13.800085 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-09-08 00:18:55.449404 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-09-08 00:18:55.449527 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-09-08 00:18:55.449543 | orchestrator | changed: [testbed-manager]
2025-09-08 00:18:55.449584 | orchestrator |
2025-09-08 00:18:55.449596 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-09-08 00:19:05.554960 | orchestrator | changed: [testbed-manager]
2025-09-08 00:19:05.555079 | orchestrator |
2025-09-08 00:19:05.555096 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-09-08 00:19:05.657828 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-09-08 00:19:05.657927 | orchestrator |
2025-09-08 00:19:05.657945 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-08 00:19:05.657958 | orchestrator |
2025-09-08 00:19:05.657970 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-09-08 00:19:05.706111 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:19:05.706161 | orchestrator |
2025-09-08 00:19:05.706173 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:19:05.706186 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-09-08 00:19:05.706197 | orchestrator |
2025-09-08 00:19:05.807785 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-08 00:19:05.807855 | orchestrator | + deactivate
2025-09-08 00:19:05.807869 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-09-08 00:19:05.807883 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-08 00:19:05.807894 | orchestrator | + export PATH
2025-09-08 00:19:05.807906 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-09-08 00:19:05.807917 | orchestrator | + '[' -n '' ']'
2025-09-08 00:19:05.807928 | orchestrator | + hash -r
2025-09-08 00:19:05.807963 | orchestrator | + '[' -n '' ']'
2025-09-08 00:19:05.807974 | orchestrator | + unset VIRTUAL_ENV
2025-09-08 00:19:05.807986 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-09-08 00:19:05.807997 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-09-08 00:19:05.808008 | orchestrator | + unset -f deactivate
2025-09-08 00:19:05.808020 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-09-08 00:19:05.815561 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-08 00:19:05.815586 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-09-08 00:19:05.815597 | orchestrator | + local max_attempts=60
2025-09-08 00:19:05.815608 | orchestrator | + local name=ceph-ansible
2025-09-08 00:19:05.815619 | orchestrator | + local attempt_num=1
2025-09-08 00:19:05.816588 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-08 00:19:05.857176 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-08 00:19:05.857207 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-08 00:19:05.857218 | orchestrator | + local max_attempts=60
2025-09-08 00:19:05.857230 | orchestrator | + local name=kolla-ansible
2025-09-08 00:19:05.857241 | orchestrator | + local attempt_num=1
2025-09-08 00:19:05.858491 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-08 00:19:05.901478 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-08 00:19:05.901513 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-08 00:19:05.901524 | orchestrator | + local max_attempts=60
2025-09-08 00:19:05.901535 | orchestrator | + local name=osism-ansible
2025-09-08 00:19:05.901547 | orchestrator | + local attempt_num=1
2025-09-08 00:19:05.902779 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-09-08 00:19:05.937207 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-08 00:19:05.937245 | orchestrator | + [[ true == \t\r\u\e ]]
2025-09-08 00:19:05.937257 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-09-08 00:19:06.666836 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-09-08 00:19:06.898447 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-09-08 00:19:06.898552 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-09-08 00:19:06.898568 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-09-08 00:19:06.898604 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-09-08 00:19:06.898617 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-09-08 00:19:06.898643 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-09-08 00:19:06.898701 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-09-08 00:19:06.898721 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy)
2025-09-08 00:19:06.898738 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-09-08 00:19:06.898749 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-09-08 00:19:06.898760 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-09-08 00:19:06.898771 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-09-08 00:19:06.898782 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-09-08 00:19:06.898793 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp
2025-09-08 00:19:06.898805 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-09-08 00:19:06.898816 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-09-08 00:19:06.904770 | orchestrator | ++ semver latest 7.0.0
2025-09-08 00:19:06.955286 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-08 00:19:06.955348 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-08 00:19:06.955362 | orchestrator | + sed -i
s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-09-08 00:19:06.959255 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-09-08 00:19:19.289346 | orchestrator | 2025-09-08 00:19:19 | INFO  | Task 7d65e2f8-cc51-4ffe-abc0-b92b184f8b83 (resolvconf) was prepared for execution.
2025-09-08 00:19:19.289471 | orchestrator | 2025-09-08 00:19:19 | INFO  | It takes a moment until task 7d65e2f8-cc51-4ffe-abc0-b92b184f8b83 (resolvconf) has been started and output is visible here.
2025-09-08 00:19:32.960357 | orchestrator |
2025-09-08 00:19:32.960477 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-09-08 00:19:32.960494 | orchestrator |
2025-09-08 00:19:32.960506 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-08 00:19:32.960542 | orchestrator | Monday 08 September 2025 00:19:23 +0000 (0:00:00.154) 0:00:00.154 ******
2025-09-08 00:19:32.960554 | orchestrator | ok: [testbed-manager]
2025-09-08 00:19:32.960568 | orchestrator |
2025-09-08 00:19:32.960579 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-09-08 00:19:32.960591 | orchestrator | Monday 08 September 2025 00:19:27 +0000 (0:00:03.849) 0:00:04.003 ******
2025-09-08 00:19:32.960602 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:19:32.960614 | orchestrator |
2025-09-08 00:19:32.960625 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-09-08 00:19:32.960636 | orchestrator | Monday 08 September 2025 00:19:27 +0000 (0:00:00.058) 0:00:04.061 ******
2025-09-08 00:19:32.960695 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-09-08 00:19:32.960709 | orchestrator |
2025-09-08 00:19:32.960720 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-09-08 00:19:32.960731 | orchestrator | Monday 08 September 2025 00:19:27 +0000 (0:00:00.088) 0:00:04.150 ******
2025-09-08 00:19:32.960742 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-09-08 00:19:32.960753 | orchestrator |
2025-09-08 00:19:32.960764 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-09-08 00:19:32.960775 | orchestrator | Monday 08 September 2025 00:19:27 +0000 (0:00:00.067) 0:00:04.218 ******
2025-09-08 00:19:32.960786 | orchestrator | ok: [testbed-manager]
2025-09-08 00:19:32.960797 | orchestrator |
2025-09-08 00:19:32.960808 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-09-08 00:19:32.960819 | orchestrator | Monday 08 September 2025 00:19:28 +0000 (0:00:01.099) 0:00:05.318 ******
2025-09-08 00:19:32.960830 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:19:32.960841 | orchestrator |
2025-09-08 00:19:32.960852 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-09-08 00:19:32.960865 | orchestrator | Monday 08 September 2025 00:19:28 +0000 (0:00:00.062) 0:00:05.381 ******
2025-09-08 00:19:32.960877 | orchestrator | ok: [testbed-manager]
2025-09-08 00:19:32.960890 | orchestrator |
2025-09-08 00:19:32.960903 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-09-08 00:19:32.960916 | orchestrator | Monday 08 September 2025 00:19:28 +0000 (0:00:00.492) 0:00:05.874 ******
2025-09-08 00:19:32.960928 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:19:32.960941 | orchestrator |
2025-09-08 00:19:32.960953 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-09-08 00:19:32.960967 | orchestrator | Monday 08 September 2025 00:19:28 +0000 (0:00:00.082) 0:00:05.956 ******
2025-09-08 00:19:32.960980 | orchestrator | changed: [testbed-manager]
2025-09-08 00:19:32.960992 | orchestrator |
2025-09-08 00:19:32.961004 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-09-08 00:19:32.961017 | orchestrator | Monday 08 September 2025 00:19:29 +0000 (0:00:00.524) 0:00:06.481 ******
2025-09-08 00:19:32.961029 | orchestrator | changed: [testbed-manager]
2025-09-08 00:19:32.961042 | orchestrator |
2025-09-08 00:19:32.961054 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-09-08 00:19:32.961067 | orchestrator | Monday 08 September 2025 00:19:30 +0000 (0:00:01.047) 0:00:07.529 ******
2025-09-08 00:19:32.961080 | orchestrator | ok: [testbed-manager]
2025-09-08 00:19:32.961093 | orchestrator |
2025-09-08 00:19:32.961105 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-09-08 00:19:32.961118 | orchestrator | Monday 08 September 2025 00:19:31 +0000 (0:00:00.957) 0:00:08.487 ******
2025-09-08 00:19:32.961143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-09-08 00:19:32.961164 | orchestrator |
2025-09-08 00:19:32.961177 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-09-08 00:19:32.961191 | orchestrator | Monday 08 September 2025 00:19:31 +0000 (0:00:00.081) 0:00:08.568 ******
2025-09-08 00:19:32.961204 | orchestrator | changed: [testbed-manager]
2025-09-08 00:19:32.961216 | orchestrator |
2025-09-08 00:19:32.961227 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:19:32.961239 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-08 00:19:32.961250 | orchestrator |
2025-09-08 00:19:32.961261 | orchestrator |
2025-09-08 00:19:32.961272 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:19:32.961283 | orchestrator | Monday 08 September 2025 00:19:32 +0000 (0:00:01.142) 0:00:09.710 ******
2025-09-08 00:19:32.961294 | orchestrator | ===============================================================================
2025-09-08 00:19:32.961305 | orchestrator | Gathering Facts --------------------------------------------------------- 3.85s
2025-09-08 00:19:32.961316 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.14s
2025-09-08 00:19:32.961326 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.10s
2025-09-08 00:19:32.961337 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.05s
2025-09-08 00:19:32.961348 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.96s
2025-09-08 00:19:32.961359 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s
2025-09-08 00:19:32.961388 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s
2025-09-08 00:19:32.961400 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2025-09-08 00:19:32.961411 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-09-08 00:19:32.961421 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2025-09-08 00:19:32.961433 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2025-09-08 00:19:32.961443 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved
------------- 0.06s
2025-09-08 00:19:32.961454 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2025-09-08 00:19:33.249346 | orchestrator | + osism apply sshconfig
2025-09-08 00:19:45.302740 | orchestrator | 2025-09-08 00:19:45 | INFO  | Task 4e736297-766b-425a-a111-5c04b955b3f7 (sshconfig) was prepared for execution.
2025-09-08 00:19:45.302886 | orchestrator | 2025-09-08 00:19:45 | INFO  | It takes a moment until task 4e736297-766b-425a-a111-5c04b955b3f7 (sshconfig) has been started and output is visible here.
2025-09-08 00:19:57.141094 | orchestrator |
2025-09-08 00:19:57.141241 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-09-08 00:19:57.141258 | orchestrator |
2025-09-08 00:19:57.141270 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-09-08 00:19:57.141282 | orchestrator | Monday 08 September 2025 00:19:49 +0000 (0:00:00.163) 0:00:00.163 ******
2025-09-08 00:19:57.141294 | orchestrator | ok: [testbed-manager]
2025-09-08 00:19:57.141306 | orchestrator |
2025-09-08 00:19:57.141318 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-09-08 00:19:57.141329 | orchestrator | Monday 08 September 2025 00:19:49 +0000 (0:00:00.602) 0:00:00.766 ******
2025-09-08 00:19:57.141340 | orchestrator | changed: [testbed-manager]
2025-09-08 00:19:57.141352 | orchestrator |
2025-09-08 00:19:57.141363 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-09-08 00:19:57.141375 | orchestrator | Monday 08 September 2025 00:19:50 +0000 (0:00:00.514) 0:00:01.281 ******
2025-09-08 00:19:57.141387 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-09-08 00:19:57.141398 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-09-08 00:19:57.141442 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-09-08 00:19:57.141454 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-09-08 00:19:57.141465 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-09-08 00:19:57.141497 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-09-08 00:19:57.141508 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-09-08 00:19:57.141519 | orchestrator |
2025-09-08 00:19:57.141530 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-09-08 00:19:57.141541 | orchestrator | Monday 08 September 2025 00:19:56 +0000 (0:00:05.864) 0:00:07.145 ******
2025-09-08 00:19:57.141551 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:19:57.141562 | orchestrator |
2025-09-08 00:19:57.141573 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-09-08 00:19:57.141584 | orchestrator | Monday 08 September 2025 00:19:56 +0000 (0:00:00.062) 0:00:07.208 ******
2025-09-08 00:19:57.141594 | orchestrator | changed: [testbed-manager]
2025-09-08 00:19:57.141605 | orchestrator |
2025-09-08 00:19:57.141618 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:19:57.141632 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-08 00:19:57.141679 | orchestrator |
2025-09-08 00:19:57.141693 | orchestrator |
2025-09-08 00:19:57.141706 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:19:57.141719 | orchestrator | Monday 08 September 2025 00:19:56 +0000 (0:00:00.608) 0:00:07.816 ******
2025-09-08 00:19:57.141732 | orchestrator | ===============================================================================
2025-09-08 00:19:57.141745 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.86s
2025-09-08 00:19:57.141758 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.61s
2025-09-08 00:19:57.141770 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.60s
2025-09-08 00:19:57.141784 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.51s
2025-09-08 00:19:57.141797 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s
2025-09-08 00:19:57.419297 | orchestrator | + osism apply known-hosts
2025-09-08 00:20:09.496521 | orchestrator | 2025-09-08 00:20:09 | INFO  | Task ee51e9c5-43d8-4e05-b7c5-c6415e72e4fb (known-hosts) was prepared for execution.
2025-09-08 00:20:09.496617 | orchestrator | 2025-09-08 00:20:09 | INFO  | It takes a moment until task ee51e9c5-43d8-4e05-b7c5-c6415e72e4fb (known-hosts) has been started and output is visible here.
2025-09-08 00:20:26.582375 | orchestrator |
2025-09-08 00:20:26.582508 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-09-08 00:20:26.582525 | orchestrator |
2025-09-08 00:20:26.582537 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-09-08 00:20:26.582549 | orchestrator | Monday 08 September 2025 00:20:13 +0000 (0:00:00.217) 0:00:00.217 ******
2025-09-08 00:20:26.582560 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-09-08 00:20:26.582571 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-09-08 00:20:26.582582 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-09-08 00:20:26.582592 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-09-08 00:20:26.582601 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-09-08 00:20:26.582611 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-09-08 00:20:26.582621 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-09-08 00:20:26.582630 | orchestrator |
2025-09-08 00:20:26.582640 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-09-08 00:20:26.582695 | orchestrator | Monday 08 September 2025 00:20:19 +0000 (0:00:06.183) 0:00:06.401 ******
2025-09-08 00:20:26.582733 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-09-08 00:20:26.582747 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-09-08 00:20:26.582757 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-09-08 00:20:26.582767 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-09-08 00:20:26.582777 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-09-08 00:20:26.582799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-09-08 00:20:26.582810 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-09-08 00:20:26.582820 | orchestrator |
2025-09-08 00:20:26.582830 | orchestrator | TASK
[osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-08 00:20:26.582841 | orchestrator | Monday 08 September 2025 00:20:19 +0000 (0:00:00.164) 0:00:06.565 ******
2025-09-08 00:20:26.582851 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKA6ARQ6Q0O0hZKXwEZ81B3qz4/J3Wzw877++wMvssUBTW/5bLGO0nGUpc3J2Zmqpa5cx0Y9NsUF8eufbmfc42w=)
2025-09-08 00:20:26.582868 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDeQHYXP8iMMrpT4adYdrGexpWIAr4YUeUmVQiDTPrOeLu9IdJz3VgCgDSS0SchKAbq6mjDWQ6gAXRe0+mDoYNBYu9cDMKuvTHzkh3hazTIWdbSVxO/6Scduaqef2fkcSSvzW54hjdnocMwbWYuuVU/C+t/kntg7TLopOvFU5476RtVXNETY2jwl3CvU4MLwofI8Ld6sRaMr2uqYWg+AtUcXI4CHn1zYHV86FfTDGjIbTS18ibDvhK/y2NpRPrGzFF5wNHSPOzoFWqewX5cki/Hqn3ntQ8yce04rctPYXRP5paX001W9n9YXCxwmJkJIa+wXXLJ32Gqj00kXffYtNRUrH5QC7vbJtufW3QUhsb3jNxxBjla9kqPtS2lxO9pwDUZ8qrdHE+zB8/5wEVikwHsTNjb8dUhkE9BQ6ftau7VxsV0ukrKkVASJz7RgHHDq7p2aW5Ur6XW+M8r2Oo6zAXHIEJdKADGeGSXp3pUZUzz+SgwLQBVqIJHEaauyX/CavE=)
2025-09-08 00:20:26.582884 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMHTPPTC5ni+rwStn6LOqpTPqfinpXBNh+nQ0zemPen5)
2025-09-08 00:20:26.582897 | orchestrator |
2025-09-08 00:20:26.582909 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-08 00:20:26.582921 | orchestrator | Monday 08 September 2025 00:20:21 +0000 (0:00:01.222) 0:00:07.788 ******
2025-09-08 00:20:26.582932 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAQ5kKchjcHFxZvRR0oAL+AbTiwrCAKh+Y2wy4QSYrL+DQ8qZkoIot/PIq3o1YHvZsho/2CrwTwkb0YTDsJiaIw=)
2025-09-08 00:20:26.582944 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMk0Bg2JlkBcVQlUuBiq/rstc9uTWKw2ZZRID6L8+f+t)
2025-09-08 00:20:26.582984 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNtw8anAe/qrABG2+EYZ6ntKknaLV5P22P0RCSOFkNpswypSiE+WhqNid+jNjT14dWiT7GtlyeJl625Ffya2Y37CRgbFp+gx03gB+lMTJIZqfzeWSdz0cY6wDRsxcWTud0NaKqr8Q7NWnHtUu5L/V5QRsrBHx+Apfhe4Cxe6FYx6ZFTEsk8FLwzssyeZ4scNTx1pKlCskHrZQIo8VzuLb2A3C8wSKGWQjk+LOOEE+ZSaUEY6AY9/YgRkHkPlJeeXrPw53vlHXA5cC9U0uBP+KvEJF2/PGA/ls0tcwiBfzLrqw0vZeqQtrbAPUMZHGVwPc2qflLqk29vIvEnGb7PK6+ttksOBNEpbuL4wgE3cmF3Rka5PdNSnf4BcjeRVmKKFkLyHWqT4aGFNlvJhNUPcVNYk1lPsr5i1gtxGtHPQEnhbDf42X6u7i4QBLPVoW8+0N2+YIBfWueyxGQYOdzOyGT5inp3B6+nHu51OkLeDsSMM1+zaQypzyqmw+aHmntTIk=)
2025-09-08 00:20:26.583007 | orchestrator |
2025-09-08 00:20:26.583019 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-08 00:20:26.583031 | orchestrator | Monday 08 September 2025 00:20:22 +0000 (0:00:01.119) 0:00:08.907 ******
2025-09-08 00:20:26.583043 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMX3Ksv5zd8J3nJMO9xuc2IKd+N7f/WV60AwG7xZB0VgMFZXqHwS/Z265n7ZOoJS20blfdnmGc9uECTB8FpQWI8=)
2025-09-08 00:20:26.583055 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoW0PRUuv5oJ0ACGuHsNhdPtk54nP0ilF9DlGGpOw79YtIxL48niI8tZS4qFC8xHg7yGFTwPJDN/XuowEISfGZQ/wRYdHngjOTRXoLVRvz/RMqxXYp7ipSTwMji8lqEBfGgGsBLshkbbd3eT/AlK1NYYpev/XuHruCRHSzElxhGNYHPUYocLmH0BmYThN9P59mgywrGAGmANyy/YN/aLOsHQSC4IVkMI/CijXbZbsA8EaDa1CEs2xwYgk2Ul3vPENkhP1gk0/HVKuwylMYPwPLqa4O1SNVMWUd9olU5WvXoUoCNRAjaG7oztCT/p+nwe+pKAJssE1rkuuWJx5hYQpRUfV+S/V0R6fOzZGQWjBDe/Sv0W4x3V41SvYbdrzmfb9QUvD2seN656FD2GxWBnejZFxHtoe45iAKt4wue+KhqwjtYD/fiPYOszaCt86HINPyNygra11QIxPpePYi+hc6d/Vha+UEIMpEQZcvROSo8AYq08HiuFwOFJsku1AOzBc=)
2025-09-08 00:20:26.583067 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFu2QXOJxJG9CcFkUbKIvAEjb0MY6nGEbm+Mtp00LRXO)
2025-09-08 00:20:26.583078 | orchestrator |
2025-09-08 00:20:26.583089 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-08 00:20:26.583100 | orchestrator | Monday 08 September 2025 00:20:23 +0000 (0:00:01.085) 0:00:09.992 ******
2025-09-08 00:20:26.583176 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/LDt7pgxwZeZOX7w3oskZYhcI1GE71TQCiJLzDj1JWHYk9eFqaBrDFWgkXz/eopyVoxBFn5LncN0te5G2dFh+4Aa9LfGv2jtrsWp8yQPen+qppl8d0y070Y/5xbOrEO/rk1AAWhMLo+2pgJEbR9DZbR6bg63AdzEHdtcVxCIPp3DOWhDhx5JY83GdlR7fA3dPiZcX6OQiwNd4Eeh0dqyRO8ac4v3U4Hpl8anFJFU9RhbXVxWwrok/zH/3J44mFOredfwhuMGeonVVicJK8gJzpcVkO2ZDeBHfZYOMmM6RwHJyBayHs+o/c/csfMRZHPZBlqVXhCIMKgZC6OcpVUACDmBGhpQluDnFevYIfG+QOgFAejXc40gtnWZFUhwHrkX2R90eL9VGHwD4wZQ11dx/oYtTGzq7ACnLJ9LDUCpvdLZQCv7Xz+L/yJF3thpZ5ewrda0TzhYoYGhhsIqyUsW5+3sEShuZvA6ZuTM4vbMCXURS+/D+vG1/nLzpucC77gU=)
2025-09-08 00:20:26.583189 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOZdQENFiv76ZcE8gNQly76wQNbid/RUkAbg9vOMlLkm)
2025-09-08 00:20:26.583201 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNPUbMnHJEONnUWqx++NpfEzpFgjX+e4hRAQpjC7uHP2QxEGXKwgFY0lbd658+xp1pK5rIxeaJrwmVRVppNbHio=)
2025-09-08 00:20:26.583213 | orchestrator |
2025-09-08 00:20:26.583224 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-08 00:20:26.583233 | orchestrator | Monday 08 September 2025 00:20:24 +0000 (0:00:01.078) 0:00:11.071 ******
2025-09-08 00:20:26.583243 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMru4mi+HJbduP9QxA8kMyLaypoD0U7DayGWkpVsDM/9KfVKk8qUmiGsGGeKlk9OrQ4B3kSMV2u+Y6ek5Hg8GEw=)
2025-09-08 00:20:26.583253 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0yRrAFl1TacbknUapeCPIiXl+spBqYzlVNsIQLUfnXkLc2opUihrnJQRMD190coaKGC7fo5xtbsQlvKvdS6fvvbctlc7Ck8+5dZZbrEHofInOV/it2F7hWpAo5ftFXhT2uVa0Qw4PD/BFJp6Yx60/y+dunbKYU6ttpx86mBKIOBbKI+F6hQLZsr/Guusfs2e6rS5LC4ViG6xJHArxaMAp3hn/RiENCc4vd/Cx08kTSorg+wGcJjunbOuVnzcT3PMR/Mjj4/lJPsR7FhnBNFGazioH2MT/rt2lC5J3fSjxiRnUt2s9FAkqmhIUiLz173oSGvQZTeJKn5WYY0F8NK5S4oQRPzr1XBWjzZTalesRMsYWFOE/PLrYYsNcZXv4bnKm8hNyg1n1B/a3IWTGmJAE2BIMeYPhfmW631PPdmG9aG7E6+Z7TzEWcoDfjZ6vf/MjFVIBhXbDS7Czj1uOJYHLVBwQuZSSjXbGJBE5o6wBDhOXpI1DDUaaXH/9g+o7znc=)
2025-09-08 00:20:26.583264 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEfNCJMVOgTgezMt80ox1R7XUmmtOEsbrfdPFU28BeqG)
2025-09-08 00:20:26.583281 | orchestrator |
2025-09-08 00:20:26.583291 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-08 00:20:26.583301 | orchestrator | Monday 08 September 2025 00:20:25 +0000 (0:00:01.071) 0:00:12.143 ******
2025-09-08 00:20:26.583317 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO+xfX4uhKEfOZI8s2CRrwC474UblPcqJsPL4K3CWP8h5ATIoX/zkSe+HcSTt8+S+QeopCsr2/fHY/0dtbIFsFA=)
2025-09-08 00:20:37.600699 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGqYp6kQbKVs0x7QD5QibyWAuQEGWy2QpdZbZ3t2+pGT)
2025-09-08 00:20:37.600837 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyfPJGP8jPnzFE5VmpfJRER0SIIQT5lPVp7QtuZzdIyo1d4CrFC+cZXFazqZ2rqz4o6UpN2BmGrO8ymv66YpuqQ4TOf6yjWtzJxv3FhacvyUyQ6jc9l0y9AMDxipvjDWDr9ENBsP8WSYhgCaF+z06hkO+zCiQPwtLfjcBqfv1xAY80WOk/AYTqzxE7xYe66t2j/08ho04FYAZ2CPI+6yawLVdc3gVLUxTGkscsOrqyjyR+KNcOiB2ykX/NTosqJoB5mzwWJsULu59HH4FZSp4HnZDHMPOOkWSEmfzhJIFWjKcRVnGG5TryO6AiqzQ+gU9ovwL4W42eQ1fLVG9SOGHGpjlIK9voM00yb3JlNmuzApnudL2rpUsrTTkVaodGslJ7jjTtbyhHMwrq0sF6q2IVLTjclteC4o7PY7+e0Mf1omtWxB9G4UOOiKYfD5g9q0yPNQPhGRuFJZxWQVwcRluPiTprZVA0POKkJqCwaSC9X7pzcCbWBJSEyxfe1SkGOWc=)
2025-09-08 00:20:37.600857 | orchestrator |
2025-09-08 00:20:37.600869 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-08 00:20:37.600882 | orchestrator | Monday 08 September 2025 00:20:26 +0000 (0:00:01.111) 0:00:13.254 ******
2025-09-08 00:20:37.600892 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHbj6FYbMkbZGZ3RWeFVpRbjCHs/q2dqxx0vhrJeRB9bnkGEKpm6BH6AdfwA2jX8s0MYtOpbkUFsR9Ow6K5UW1g=)
2025-09-08 00:20:37.600904 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICSegH7C07c8H06I2vGLOlRfnjLY2ceF1PdZhEviCI3k)
2025-09-08 00:20:37.600914 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfH1SwwsJYIoiFKg+56MlyJAGPgUc30nWpenb+s7e2TfoPijU0lBOMvFkgavJjE4p5m8qwUOFdq29rppWKZYIGO/HnZeTfTEvJYSrX+oDd5G+N2/fraB6RxUBAwrMApDXIjiLiF26B5XU4Drntls1FmZ/TTIW9BJM10+RilRZjMU1SNYWhxY/xYgJevF6CCROJQa1Lk2cmlsuuchnBWW4dxhNWq9ugvN5wL06QO9ONeBzuCWIHqfHGLaDyuOsOymm4eOR9yTbjHefo2wRrvm7hJ7noktifXbCKAgcZUR16p6jGWLdvug7PB5nsbEKjRPMpOSswksHr/QnOJcljYHeG0spPBdhMSjdOEQGz1dHER1YvVGyxfLXmg7NXpEXtkgUD1pb7Q2Z2x/79bHJt8EuZ9+fxbkQV1IQXmZb/wfYCRrGSV+Q0b5b7aY25OIMQzGN8tg3INKzVVnnxGD/1R3IYBCGX2R0XU71q3kGX35XSc2K3/iDJNom55fAmUpBfvDk=)
2025-09-08 00:20:37.600925 | orchestrator |
2025-09-08 00:20:37.600935 |
orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-09-08 00:20:37.600947 | orchestrator | Monday 08 September 2025 00:20:27 +0000 (0:00:01.081) 0:00:14.335 ******
2025-09-08 00:20:37.600957 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-09-08 00:20:37.600968 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-09-08 00:20:37.600977 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-09-08 00:20:37.600987 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-09-08 00:20:37.600996 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-09-08 00:20:37.601006 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-09-08 00:20:37.601016 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-09-08 00:20:37.601025 | orchestrator |
2025-09-08 00:20:37.601035 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-09-08 00:20:37.601069 | orchestrator | Monday 08 September 2025 00:20:32 +0000 (0:00:05.328) 0:00:19.664 ******
2025-09-08 00:20:37.601081 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-09-08 00:20:37.601093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-09-08 00:20:37.601128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-09-08 00:20:37.601138 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-09-08 00:20:37.601148 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-09-08 00:20:37.601158 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-09-08 00:20:37.601167 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-09-08 00:20:37.601177 | orchestrator |
2025-09-08 00:20:37.601208 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-08 00:20:37.601220 | orchestrator | Monday 08 September 2025 00:20:33 +0000 (0:00:00.176) 0:00:19.840 ******
2025-09-08 00:20:37.601232 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMHTPPTC5ni+rwStn6LOqpTPqfinpXBNh+nQ0zemPen5)
2025-09-08 00:20:37.601247 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDeQHYXP8iMMrpT4adYdrGexpWIAr4YUeUmVQiDTPrOeLu9IdJz3VgCgDSS0SchKAbq6mjDWQ6gAXRe0+mDoYNBYu9cDMKuvTHzkh3hazTIWdbSVxO/6Scduaqef2fkcSSvzW54hjdnocMwbWYuuVU/C+t/kntg7TLopOvFU5476RtVXNETY2jwl3CvU4MLwofI8Ld6sRaMr2uqYWg+AtUcXI4CHn1zYHV86FfTDGjIbTS18ibDvhK/y2NpRPrGzFF5wNHSPOzoFWqewX5cki/Hqn3ntQ8yce04rctPYXRP5paX001W9n9YXCxwmJkJIa+wXXLJ32Gqj00kXffYtNRUrH5QC7vbJtufW3QUhsb3jNxxBjla9kqPtS2lxO9pwDUZ8qrdHE+zB8/5wEVikwHsTNjb8dUhkE9BQ6ftau7VxsV0ukrKkVASJz7RgHHDq7p2aW5Ur6XW+M8r2Oo6zAXHIEJdKADGeGSXp3pUZUzz+SgwLQBVqIJHEaauyX/CavE=)
2025-09-08 00:20:37.601260 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKA6ARQ6Q0O0hZKXwEZ81B3qz4/J3Wzw877++wMvssUBTW/5bLGO0nGUpc3J2Zmqpa5cx0Y9NsUF8eufbmfc42w=)
2025-09-08 00:20:37.601272 | orchestrator |
2025-09-08 00:20:37.601283 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-08 00:20:37.601295 | orchestrator | Monday 08 September 2025 00:20:34 +0000 (0:00:01.153) 0:00:20.993 ******
2025-09-08 00:20:37.601306 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMk0Bg2JlkBcVQlUuBiq/rstc9uTWKw2ZZRID6L8+f+t)
2025-09-08 00:20:37.601319 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNtw8anAe/qrABG2+EYZ6ntKknaLV5P22P0RCSOFkNpswypSiE+WhqNid+jNjT14dWiT7GtlyeJl625Ffya2Y37CRgbFp+gx03gB+lMTJIZqfzeWSdz0cY6wDRsxcWTud0NaKqr8Q7NWnHtUu5L/V5QRsrBHx+Apfhe4Cxe6FYx6ZFTEsk8FLwzssyeZ4scNTx1pKlCskHrZQIo8VzuLb2A3C8wSKGWQjk+LOOEE+ZSaUEY6AY9/YgRkHkPlJeeXrPw53vlHXA5cC9U0uBP+KvEJF2/PGA/ls0tcwiBfzLrqw0vZeqQtrbAPUMZHGVwPc2qflLqk29vIvEnGb7PK6+ttksOBNEpbuL4wgE3cmF3Rka5PdNSnf4BcjeRVmKKFkLyHWqT4aGFNlvJhNUPcVNYk1lPsr5i1gtxGtHPQEnhbDf42X6u7i4QBLPVoW8+0N2+YIBfWueyxGQYOdzOyGT5inp3B6+nHu51OkLeDsSMM1+zaQypzyqmw+aHmntTIk=)
2025-09-08 00:20:37.601331 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAQ5kKchjcHFxZvRR0oAL+AbTiwrCAKh+Y2wy4QSYrL+DQ8qZkoIot/PIq3o1YHvZsho/2CrwTwkb0YTDsJiaIw=)
2025-09-08 00:20:37.601342 | orchestrator |
2025-09-08 00:20:37.601353 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-08 00:20:37.601365 | orchestrator | Monday 08 September 2025 00:20:35 +0000 (0:00:01.100) 0:00:22.094 ******
2025-09-08 00:20:37.601385 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoW0PRUuv5oJ0ACGuHsNhdPtk54nP0ilF9DlGGpOw79YtIxL48niI8tZS4qFC8xHg7yGFTwPJDN/XuowEISfGZQ/wRYdHngjOTRXoLVRvz/RMqxXYp7ipSTwMji8lqEBfGgGsBLshkbbd3eT/AlK1NYYpev/XuHruCRHSzElxhGNYHPUYocLmH0BmYThN9P59mgywrGAGmANyy/YN/aLOsHQSC4IVkMI/CijXbZbsA8EaDa1CEs2xwYgk2Ul3vPENkhP1gk0/HVKuwylMYPwPLqa4O1SNVMWUd9olU5WvXoUoCNRAjaG7oztCT/p+nwe+pKAJssE1rkuuWJx5hYQpRUfV+S/V0R6fOzZGQWjBDe/Sv0W4x3V41SvYbdrzmfb9QUvD2seN656FD2GxWBnejZFxHtoe45iAKt4wue+KhqwjtYD/fiPYOszaCt86HINPyNygra11QIxPpePYi+hc6d/Vha+UEIMpEQZcvROSo8AYq08HiuFwOFJsku1AOzBc=)
2025-09-08 00:20:37.601397 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMX3Ksv5zd8J3nJMO9xuc2IKd+N7f/WV60AwG7xZB0VgMFZXqHwS/Z265n7ZOoJS20blfdnmGc9uECTB8FpQWI8=)
2025-09-08 00:20:37.601409 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFu2QXOJxJG9CcFkUbKIvAEjb0MY6nGEbm+Mtp00LRXO)
2025-09-08 00:20:37.601420 | orchestrator |
2025-09-08 00:20:37.601431 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-08 00:20:37.601442 | orchestrator | Monday 08 September 2025 00:20:36 +0000 (0:00:01.112) 0:00:23.206 ******
2025-09-08 00:20:37.601454 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOZdQENFiv76ZcE8gNQly76wQNbid/RUkAbg9vOMlLkm)
2025-09-08 00:20:37.601490 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQC/LDt7pgxwZeZOX7w3oskZYhcI1GE71TQCiJLzDj1JWHYk9eFqaBrDFWgkXz/eopyVoxBFn5LncN0te5G2dFh+4Aa9LfGv2jtrsWp8yQPen+qppl8d0y070Y/5xbOrEO/rk1AAWhMLo+2pgJEbR9DZbR6bg63AdzEHdtcVxCIPp3DOWhDhx5JY83GdlR7fA3dPiZcX6OQiwNd4Eeh0dqyRO8ac4v3U4Hpl8anFJFU9RhbXVxWwrok/zH/3J44mFOredfwhuMGeonVVicJK8gJzpcVkO2ZDeBHfZYOMmM6RwHJyBayHs+o/c/csfMRZHPZBlqVXhCIMKgZC6OcpVUACDmBGhpQluDnFevYIfG+QOgFAejXc40gtnWZFUhwHrkX2R90eL9VGHwD4wZQ11dx/oYtTGzq7ACnLJ9LDUCpvdLZQCv7Xz+L/yJF3thpZ5ewrda0TzhYoYGhhsIqyUsW5+3sEShuZvA6ZuTM4vbMCXURS+/D+vG1/nLzpucC77gU=) 2025-09-08 00:20:41.906781 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNPUbMnHJEONnUWqx++NpfEzpFgjX+e4hRAQpjC7uHP2QxEGXKwgFY0lbd658+xp1pK5rIxeaJrwmVRVppNbHio=) 2025-09-08 00:20:41.906897 | orchestrator | 2025-09-08 00:20:41.906912 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-08 00:20:41.906924 | orchestrator | Monday 08 September 2025 00:20:37 +0000 (0:00:01.067) 0:00:24.274 ****** 2025-09-08 00:20:41.906934 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMru4mi+HJbduP9QxA8kMyLaypoD0U7DayGWkpVsDM/9KfVKk8qUmiGsGGeKlk9OrQ4B3kSMV2u+Y6ek5Hg8GEw=) 2025-09-08 00:20:41.906947 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0yRrAFl1TacbknUapeCPIiXl+spBqYzlVNsIQLUfnXkLc2opUihrnJQRMD190coaKGC7fo5xtbsQlvKvdS6fvvbctlc7Ck8+5dZZbrEHofInOV/it2F7hWpAo5ftFXhT2uVa0Qw4PD/BFJp6Yx60/y+dunbKYU6ttpx86mBKIOBbKI+F6hQLZsr/Guusfs2e6rS5LC4ViG6xJHArxaMAp3hn/RiENCc4vd/Cx08kTSorg+wGcJjunbOuVnzcT3PMR/Mjj4/lJPsR7FhnBNFGazioH2MT/rt2lC5J3fSjxiRnUt2s9FAkqmhIUiLz173oSGvQZTeJKn5WYY0F8NK5S4oQRPzr1XBWjzZTalesRMsYWFOE/PLrYYsNcZXv4bnKm8hNyg1n1B/a3IWTGmJAE2BIMeYPhfmW631PPdmG9aG7E6+Z7TzEWcoDfjZ6vf/MjFVIBhXbDS7Czj1uOJYHLVBwQuZSSjXbGJBE5o6wBDhOXpI1DDUaaXH/9g+o7znc=) 
2025-09-08 00:20:41.906960 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEfNCJMVOgTgezMt80ox1R7XUmmtOEsbrfdPFU28BeqG) 2025-09-08 00:20:41.906972 | orchestrator | 2025-09-08 00:20:41.906982 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-08 00:20:41.906992 | orchestrator | Monday 08 September 2025 00:20:38 +0000 (0:00:01.062) 0:00:25.336 ****** 2025-09-08 00:20:41.907003 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyfPJGP8jPnzFE5VmpfJRER0SIIQT5lPVp7QtuZzdIyo1d4CrFC+cZXFazqZ2rqz4o6UpN2BmGrO8ymv66YpuqQ4TOf6yjWtzJxv3FhacvyUyQ6jc9l0y9AMDxipvjDWDr9ENBsP8WSYhgCaF+z06hkO+zCiQPwtLfjcBqfv1xAY80WOk/AYTqzxE7xYe66t2j/08ho04FYAZ2CPI+6yawLVdc3gVLUxTGkscsOrqyjyR+KNcOiB2ykX/NTosqJoB5mzwWJsULu59HH4FZSp4HnZDHMPOOkWSEmfzhJIFWjKcRVnGG5TryO6AiqzQ+gU9ovwL4W42eQ1fLVG9SOGHGpjlIK9voM00yb3JlNmuzApnudL2rpUsrTTkVaodGslJ7jjTtbyhHMwrq0sF6q2IVLTjclteC4o7PY7+e0Mf1omtWxB9G4UOOiKYfD5g9q0yPNQPhGRuFJZxWQVwcRluPiTprZVA0POKkJqCwaSC9X7pzcCbWBJSEyxfe1SkGOWc=) 2025-09-08 00:20:41.907039 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO+xfX4uhKEfOZI8s2CRrwC474UblPcqJsPL4K3CWP8h5ATIoX/zkSe+HcSTt8+S+QeopCsr2/fHY/0dtbIFsFA=) 2025-09-08 00:20:41.907050 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGqYp6kQbKVs0x7QD5QibyWAuQEGWy2QpdZbZ3t2+pGT) 2025-09-08 00:20:41.907060 | orchestrator | 2025-09-08 00:20:41.907070 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-08 00:20:41.907080 | orchestrator | Monday 08 September 2025 00:20:39 +0000 (0:00:01.067) 0:00:26.404 ****** 2025-09-08 00:20:41.907089 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHbj6FYbMkbZGZ3RWeFVpRbjCHs/q2dqxx0vhrJeRB9bnkGEKpm6BH6AdfwA2jX8s0MYtOpbkUFsR9Ow6K5UW1g=) 2025-09-08 00:20:41.907100 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfH1SwwsJYIoiFKg+56MlyJAGPgUc30nWpenb+s7e2TfoPijU0lBOMvFkgavJjE4p5m8qwUOFdq29rppWKZYIGO/HnZeTfTEvJYSrX+oDd5G+N2/fraB6RxUBAwrMApDXIjiLiF26B5XU4Drntls1FmZ/TTIW9BJM10+RilRZjMU1SNYWhxY/xYgJevF6CCROJQa1Lk2cmlsuuchnBWW4dxhNWq9ugvN5wL06QO9ONeBzuCWIHqfHGLaDyuOsOymm4eOR9yTbjHefo2wRrvm7hJ7noktifXbCKAgcZUR16p6jGWLdvug7PB5nsbEKjRPMpOSswksHr/QnOJcljYHeG0spPBdhMSjdOEQGz1dHER1YvVGyxfLXmg7NXpEXtkgUD1pb7Q2Z2x/79bHJt8EuZ9+fxbkQV1IQXmZb/wfYCRrGSV+Q0b5b7aY25OIMQzGN8tg3INKzVVnnxGD/1R3IYBCGX2R0XU71q3kGX35XSc2K3/iDJNom55fAmUpBfvDk=) 2025-09-08 00:20:41.907111 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICSegH7C07c8H06I2vGLOlRfnjLY2ceF1PdZhEviCI3k) 2025-09-08 00:20:41.907120 | orchestrator | 2025-09-08 00:20:41.907130 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-08 00:20:41.907140 | orchestrator | Monday 08 September 2025 00:20:40 +0000 (0:00:01.098) 0:00:27.503 ****** 2025-09-08 00:20:41.907151 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-08 00:20:41.907162 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-08 00:20:41.907172 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-08 00:20:41.907182 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-08 00:20:41.907210 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-08 00:20:41.907221 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-08 00:20:41.907230 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-08 00:20:41.907241 | orchestrator | skipping: 
[testbed-manager] 2025-09-08 00:20:41.907251 | orchestrator | 2025-09-08 00:20:41.907261 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-09-08 00:20:41.907272 | orchestrator | Monday 08 September 2025 00:20:40 +0000 (0:00:00.167) 0:00:27.670 ****** 2025-09-08 00:20:41.907283 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:20:41.907295 | orchestrator | 2025-09-08 00:20:41.907307 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-08 00:20:41.907319 | orchestrator | Monday 08 September 2025 00:20:41 +0000 (0:00:00.079) 0:00:27.750 ****** 2025-09-08 00:20:41.907330 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:20:41.907341 | orchestrator | 2025-09-08 00:20:41.907353 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-08 00:20:41.907364 | orchestrator | Monday 08 September 2025 00:20:41 +0000 (0:00:00.053) 0:00:27.804 ****** 2025-09-08 00:20:41.907382 | orchestrator | changed: [testbed-manager] 2025-09-08 00:20:41.907394 | orchestrator | 2025-09-08 00:20:41.907405 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:20:41.907417 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-08 00:20:41.907431 | orchestrator | 2025-09-08 00:20:41.907441 | orchestrator | 2025-09-08 00:20:41.907453 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:20:41.907464 | orchestrator | Monday 08 September 2025 00:20:41 +0000 (0:00:00.528) 0:00:28.333 ****** 2025-09-08 00:20:41.907476 | orchestrator | =============================================================================== 2025-09-08 00:20:41.907488 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.18s 2025-09-08 
00:20:41.907500 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.33s 2025-09-08 00:20:41.907512 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2025-09-08 00:20:41.907524 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-09-08 00:20:41.907557 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-09-08 00:20:41.907570 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-08 00:20:41.907582 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-08 00:20:41.907593 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-09-08 00:20:41.907605 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-09-08 00:20:41.907616 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-09-08 00:20:41.907627 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-08 00:20:41.907637 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-08 00:20:41.907666 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-08 00:20:41.907676 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-08 00:20:41.907686 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-08 00:20:41.907696 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-08 00:20:41.907706 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.53s 2025-09-08 
00:20:41.907715 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2025-09-08 00:20:41.907725 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2025-09-08 00:20:41.907735 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-09-08 00:20:42.197871 | orchestrator | + osism apply squid 2025-09-08 00:20:54.245021 | orchestrator | 2025-09-08 00:20:54 | INFO  | Task f8f81b0f-9d2b-4be2-8fba-948f15364b6f (squid) was prepared for execution. 2025-09-08 00:20:54.245130 | orchestrator | 2025-09-08 00:20:54 | INFO  | It takes a moment until task f8f81b0f-9d2b-4be2-8fba-948f15364b6f (squid) has been started and output is visible here. 2025-09-08 00:22:51.774932 | orchestrator | 2025-09-08 00:22:51.775062 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-08 00:22:51.775079 | orchestrator | 2025-09-08 00:22:51.775091 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-08 00:22:51.775103 | orchestrator | Monday 08 September 2025 00:20:58 +0000 (0:00:00.169) 0:00:00.169 ****** 2025-09-08 00:22:51.775133 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-08 00:22:51.775147 | orchestrator | 2025-09-08 00:22:51.775158 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-08 00:22:51.775199 | orchestrator | Monday 08 September 2025 00:20:58 +0000 (0:00:00.085) 0:00:00.254 ****** 2025-09-08 00:22:51.775210 | orchestrator | ok: [testbed-manager] 2025-09-08 00:22:51.775223 | orchestrator | 2025-09-08 00:22:51.775234 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-08 
00:22:51.775245 | orchestrator | Monday 08 September 2025 00:20:59 +0000 (0:00:01.431) 0:00:01.686 ****** 2025-09-08 00:22:51.775256 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-08 00:22:51.775267 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-08 00:22:51.775278 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-08 00:22:51.775289 | orchestrator | 2025-09-08 00:22:51.775300 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-08 00:22:51.775310 | orchestrator | Monday 08 September 2025 00:21:00 +0000 (0:00:01.159) 0:00:02.846 ****** 2025-09-08 00:22:51.775321 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-08 00:22:51.775333 | orchestrator | 2025-09-08 00:22:51.775344 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-09-08 00:22:51.775355 | orchestrator | Monday 08 September 2025 00:21:01 +0000 (0:00:01.117) 0:00:03.964 ****** 2025-09-08 00:22:51.775366 | orchestrator | ok: [testbed-manager] 2025-09-08 00:22:51.775376 | orchestrator | 2025-09-08 00:22:51.775387 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-09-08 00:22:51.775398 | orchestrator | Monday 08 September 2025 00:21:02 +0000 (0:00:00.375) 0:00:04.340 ****** 2025-09-08 00:22:51.775409 | orchestrator | changed: [testbed-manager] 2025-09-08 00:22:51.775420 | orchestrator | 2025-09-08 00:22:51.775431 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-08 00:22:51.775442 | orchestrator | Monday 08 September 2025 00:21:03 +0000 (0:00:00.974) 0:00:05.315 ****** 2025-09-08 00:22:51.775453 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-09-08 00:22:51.775467 | orchestrator | ok: [testbed-manager] 2025-09-08 00:22:51.775481 | orchestrator | 2025-09-08 00:22:51.775494 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-08 00:22:51.775507 | orchestrator | Monday 08 September 2025 00:21:36 +0000 (0:00:33.076) 0:00:38.391 ****** 2025-09-08 00:22:51.775520 | orchestrator | changed: [testbed-manager] 2025-09-08 00:22:51.775532 | orchestrator | 2025-09-08 00:22:51.775545 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-08 00:22:51.775558 | orchestrator | Monday 08 September 2025 00:21:50 +0000 (0:00:14.314) 0:00:52.706 ****** 2025-09-08 00:22:51.775571 | orchestrator | Pausing for 60 seconds 2025-09-08 00:22:51.775584 | orchestrator | changed: [testbed-manager] 2025-09-08 00:22:51.775598 | orchestrator | 2025-09-08 00:22:51.775612 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-08 00:22:51.775625 | orchestrator | Monday 08 September 2025 00:22:50 +0000 (0:01:00.084) 0:01:52.790 ****** 2025-09-08 00:22:51.775657 | orchestrator | ok: [testbed-manager] 2025-09-08 00:22:51.775670 | orchestrator | 2025-09-08 00:22:51.775683 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-08 00:22:51.775696 | orchestrator | Monday 08 September 2025 00:22:50 +0000 (0:00:00.065) 0:01:52.856 ****** 2025-09-08 00:22:51.775708 | orchestrator | changed: [testbed-manager] 2025-09-08 00:22:51.775721 | orchestrator | 2025-09-08 00:22:51.775734 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:22:51.775748 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:22:51.775761 | orchestrator | 2025-09-08 00:22:51.775774 | orchestrator | 2025-09-08 00:22:51.775787 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-09-08 00:22:51.775800 | orchestrator | Monday 08 September 2025 00:22:51 +0000 (0:00:00.702) 0:01:53.558 ****** 2025-09-08 00:22:51.775848 | orchestrator | =============================================================================== 2025-09-08 00:22:51.775871 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-09-08 00:22:51.775882 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 33.08s 2025-09-08 00:22:51.775893 | orchestrator | osism.services.squid : Restart squid service --------------------------- 14.31s 2025-09-08 00:22:51.775904 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.43s 2025-09-08 00:22:51.775915 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.16s 2025-09-08 00:22:51.775925 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.12s 2025-09-08 00:22:51.775936 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.97s 2025-09-08 00:22:51.775947 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.70s 2025-09-08 00:22:51.775958 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2025-09-08 00:22:51.775968 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-09-08 00:22:51.775979 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-09-08 00:22:52.074688 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-08 00:22:52.075150 | orchestrator | ++ semver latest 9.0.0 2025-09-08 00:22:52.125339 | orchestrator | + [[ -1 -lt 0 ]] 2025-09-08 00:22:52.125379 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-08 00:22:52.125808 | 
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-08 00:23:04.272677 | orchestrator | 2025-09-08 00:23:04 | INFO  | Task 12832a6d-fb79-4943-85ac-a9ae47847f98 (operator) was prepared for execution. 2025-09-08 00:23:04.272801 | orchestrator | 2025-09-08 00:23:04 | INFO  | It takes a moment until task 12832a6d-fb79-4943-85ac-a9ae47847f98 (operator) has been started and output is visible here. 2025-09-08 00:23:20.644284 | orchestrator | 2025-09-08 00:23:20.644408 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-08 00:23:20.644424 | orchestrator | 2025-09-08 00:23:20.644436 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-08 00:23:20.644448 | orchestrator | Monday 08 September 2025 00:23:08 +0000 (0:00:00.152) 0:00:00.152 ****** 2025-09-08 00:23:20.644459 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:23:20.644471 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:23:20.644482 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:23:20.644493 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:23:20.644504 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:23:20.644534 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:23:20.644546 | orchestrator | 2025-09-08 00:23:20.644557 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-08 00:23:20.644568 | orchestrator | Monday 08 September 2025 00:23:12 +0000 (0:00:03.861) 0:00:04.014 ****** 2025-09-08 00:23:20.644579 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:23:20.644589 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:23:20.644601 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:23:20.644612 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:23:20.644622 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:23:20.644633 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:23:20.644681 | orchestrator | 2025-09-08 
00:23:20.644692 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-09-08 00:23:20.644703 | orchestrator | 2025-09-08 00:23:20.644714 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-08 00:23:20.644725 | orchestrator | Monday 08 September 2025 00:23:12 +0000 (0:00:00.806) 0:00:04.820 ****** 2025-09-08 00:23:20.644736 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:23:20.644747 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:23:20.644758 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:23:20.644769 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:23:20.644779 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:23:20.644790 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:23:20.644829 | orchestrator | 2025-09-08 00:23:20.644843 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-08 00:23:20.644856 | orchestrator | Monday 08 September 2025 00:23:13 +0000 (0:00:00.193) 0:00:05.014 ****** 2025-09-08 00:23:20.644868 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:23:20.644881 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:23:20.644893 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:23:20.644906 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:23:20.644918 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:23:20.644931 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:23:20.644944 | orchestrator | 2025-09-08 00:23:20.644957 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-08 00:23:20.644970 | orchestrator | Monday 08 September 2025 00:23:13 +0000 (0:00:00.168) 0:00:05.183 ****** 2025-09-08 00:23:20.644983 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:23:20.644996 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:23:20.645009 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:23:20.645022 | 
orchestrator | changed: [testbed-node-4] 2025-09-08 00:23:20.645035 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:23:20.645049 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:23:20.645062 | orchestrator | 2025-09-08 00:23:20.645074 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-08 00:23:20.645087 | orchestrator | Monday 08 September 2025 00:23:13 +0000 (0:00:00.649) 0:00:05.832 ****** 2025-09-08 00:23:20.645100 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:23:20.645112 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:23:20.645124 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:23:20.645136 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:23:20.645149 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:23:20.645161 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:23:20.645172 | orchestrator | 2025-09-08 00:23:20.645183 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-08 00:23:20.645194 | orchestrator | Monday 08 September 2025 00:23:14 +0000 (0:00:00.921) 0:00:06.754 ****** 2025-09-08 00:23:20.645204 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-09-08 00:23:20.645215 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-09-08 00:23:20.645226 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-09-08 00:23:20.645237 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-09-08 00:23:20.645248 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-09-08 00:23:20.645259 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-09-08 00:23:20.645269 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-09-08 00:23:20.645280 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-09-08 00:23:20.645291 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-09-08 00:23:20.645301 | orchestrator | changed: 
[testbed-node-3] => (item=sudo) 2025-09-08 00:23:20.645312 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-09-08 00:23:20.645323 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-09-08 00:23:20.645333 | orchestrator | 2025-09-08 00:23:20.645344 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-08 00:23:20.645355 | orchestrator | Monday 08 September 2025 00:23:16 +0000 (0:00:01.211) 0:00:07.966 ****** 2025-09-08 00:23:20.645366 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:23:20.645377 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:23:20.645388 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:23:20.645398 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:23:20.645409 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:23:20.645420 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:23:20.645430 | orchestrator | 2025-09-08 00:23:20.645441 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-08 00:23:20.645453 | orchestrator | Monday 08 September 2025 00:23:17 +0000 (0:00:01.193) 0:00:09.160 ****** 2025-09-08 00:23:20.645464 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-09-08 00:23:20.645483 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To
2025-09-08 00:23:20.645494 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-09-08 00:23:20.645505 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-09-08 00:23:20.645534 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-09-08 00:23:20.645545 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-09-08 00:23:20.645556 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-09-08 00:23:20.645567 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-09-08 00:23:20.645578 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-09-08 00:23:20.645588 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-09-08 00:23:20.645599 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-09-08 00:23:20.645610 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-09-08 00:23:20.645620 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-09-08 00:23:20.645631 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-09-08 00:23:20.645659 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-09-08 00:23:20.645670 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-09-08 00:23:20.645681 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-09-08 00:23:20.645691 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-09-08 00:23:20.645702 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-09-08 00:23:20.645712 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-09-08 00:23:20.645723 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-09-08 00:23:20.645734 | orchestrator |
2025-09-08 00:23:20.645744 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-09-08 00:23:20.645756 | orchestrator | Monday 08 September 2025 00:23:18 +0000 (0:00:01.312) 0:00:10.472 ******
2025-09-08 00:23:20.645767 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:23:20.645778 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:23:20.645788 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:23:20.645799 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:23:20.645810 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:23:20.645820 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:23:20.645831 | orchestrator |
2025-09-08 00:23:20.645842 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-09-08 00:23:20.645853 | orchestrator | Monday 08 September 2025 00:23:18 +0000 (0:00:00.152) 0:00:10.625 ******
2025-09-08 00:23:20.645863 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:23:20.645874 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:23:20.645885 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:23:20.645895 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:23:20.645906 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:23:20.645917 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:23:20.645927 | orchestrator |
2025-09-08 00:23:20.645938 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-09-08 00:23:20.645949 | orchestrator | Monday 08 September 2025 00:23:19 +0000 (0:00:00.582) 0:00:11.208 ******
2025-09-08 00:23:20.645960 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:23:20.645971 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:23:20.645981 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:23:20.645992 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:23:20.646003 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:23:20.646071 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:23:20.646083 | orchestrator |
2025-09-08 00:23:20.646102 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-09-08 00:23:20.646113 | orchestrator | Monday 08 September 2025 00:23:19 +0000 (0:00:00.199) 0:00:11.407 ******
2025-09-08 00:23:20.646124 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-08 00:23:20.646171 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-08 00:23:20.646184 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-08 00:23:20.646195 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:23:20.646206 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:23:20.646217 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:23:20.646228 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-08 00:23:20.646238 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:23:20.646249 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-08 00:23:20.646260 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-08 00:23:20.646271 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:23:20.646282 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:23:20.646292 | orchestrator |
2025-09-08 00:23:20.646303 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-09-08 00:23:20.646314 | orchestrator | Monday 08 September 2025 00:23:20 +0000 (0:00:00.685) 0:00:12.092 ******
2025-09-08 00:23:20.646324 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:23:20.646335 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:23:20.646346 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:23:20.646356 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:23:20.646367 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:23:20.646378 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:23:20.646388 | orchestrator |
2025-09-08 00:23:20.646399 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-09-08 00:23:20.646410 | orchestrator | Monday 08 September 2025 00:23:20 +0000 (0:00:00.164) 0:00:12.257 ******
2025-09-08 00:23:20.646420 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:23:20.646431 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:23:20.646442 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:23:20.646452 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:23:20.646471 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:23:20.646482 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:23:20.646493 | orchestrator |
2025-09-08 00:23:20.646504 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-09-08 00:23:20.646515 | orchestrator | Monday 08 September 2025 00:23:20 +0000 (0:00:00.150) 0:00:12.408 ******
2025-09-08 00:23:20.646531 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:23:20.646542 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:23:20.646552 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:23:20.646563 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:23:20.646582 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:23:21.809524 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:23:21.809625 | orchestrator |
2025-09-08 00:23:21.809687 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-09-08 00:23:21.809702 | orchestrator | Monday 08 September 2025 00:23:20 +0000 (0:00:00.154) 0:00:12.562 ******
2025-09-08 00:23:21.809713 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:23:21.809724 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:23:21.809735 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:23:21.809746 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:23:21.809757 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:23:21.809768 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:23:21.809779 | orchestrator |
2025-09-08 00:23:21.809791 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-09-08 00:23:21.809802 | orchestrator | Monday 08 September 2025 00:23:21 +0000 (0:00:00.701) 0:00:13.264 ******
2025-09-08 00:23:21.809812 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:23:21.809823 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:23:21.809834 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:23:21.809872 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:23:21.809883 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:23:21.809893 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:23:21.809904 | orchestrator |
2025-09-08 00:23:21.809915 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:23:21.809927 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-08 00:23:21.809939 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-08 00:23:21.809950 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-08 00:23:21.809960 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-08 00:23:21.809971 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-08 00:23:21.809982 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-08 00:23:21.809992 | orchestrator |
2025-09-08 00:23:21.810004 | orchestrator |
2025-09-08 00:23:21.810068 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:23:21.810082 | orchestrator | Monday 08 September 2025 00:23:21 +0000 (0:00:00.220) 0:00:13.484 ******
2025-09-08 00:23:21.810096 | orchestrator | ===============================================================================
2025-09-08 00:23:21.810108 | orchestrator | Gathering Facts --------------------------------------------------------- 3.86s
2025-09-08 00:23:21.810120 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.31s
2025-09-08 00:23:21.810169 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.21s
2025-09-08 00:23:21.810182 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.19s
2025-09-08 00:23:21.810195 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.92s
2025-09-08 00:23:21.810207 | orchestrator | Do not require tty for all users ---------------------------------------- 0.81s
2025-09-08 00:23:21.810220 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.70s
2025-09-08 00:23:21.810233 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s
2025-09-08 00:23:21.810246 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.65s
2025-09-08 00:23:21.810258 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.58s
2025-09-08 00:23:21.810271 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s
2025-09-08 00:23:21.810283 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s
2025-09-08 00:23:21.810296 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.19s
2025-09-08 00:23:21.810310 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2025-09-08 00:23:21.810323 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2025-09-08 00:23:21.810336 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s
2025-09-08 00:23:21.810349 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s
2025-09-08 00:23:21.810362 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s
2025-09-08 00:23:22.086802 | orchestrator | + osism apply --environment custom facts
2025-09-08 00:23:23.931611 | orchestrator | 2025-09-08 00:23:23 | INFO  | Trying to run play facts in environment custom
2025-09-08 00:23:34.169430 | orchestrator | 2025-09-08 00:23:34 | INFO  | Task 5d5788bf-5f06-4ed3-8824-be9f21d43892 (facts) was prepared for execution.
2025-09-08 00:23:34.169552 | orchestrator | 2025-09-08 00:23:34 | INFO  | It takes a moment until task 5d5788bf-5f06-4ed3-8824-be9f21d43892 (facts) has been started and output is visible here.
2025-09-08 00:24:17.256596 | orchestrator |
2025-09-08 00:24:17.256770 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-09-08 00:24:17.256788 | orchestrator |
2025-09-08 00:24:17.256801 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-08 00:24:17.256812 | orchestrator | Monday 08 September 2025 00:23:37 +0000 (0:00:00.088) 0:00:00.088 ******
2025-09-08 00:24:17.256824 | orchestrator | ok: [testbed-manager]
2025-09-08 00:24:17.256836 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:24:17.256848 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:24:17.256859 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:24:17.256870 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:24:17.256880 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:24:17.256891 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:24:17.256902 | orchestrator |
2025-09-08 00:24:17.256913 | orchestrator | TASK [Copy fact file] **********************************************************
2025-09-08 00:24:17.256924 | orchestrator | Monday 08 September 2025 00:23:39 +0000 (0:00:01.382) 0:00:01.470 ******
2025-09-08 00:24:17.256935 | orchestrator | ok: [testbed-manager]
2025-09-08 00:24:17.256945 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:24:17.256956 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:24:17.256967 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:24:17.256978 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:24:17.256988 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:24:17.256999 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:24:17.257010 | orchestrator |
2025-09-08 00:24:17.257021 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-09-08 00:24:17.257032 | orchestrator |
2025-09-08 00:24:17.257042 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-08 00:24:17.257053 | orchestrator | Monday 08 September 2025 00:23:40 +0000 (0:00:01.164) 0:00:02.635 ******
2025-09-08 00:24:17.257064 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:17.257075 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:17.257086 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:17.257096 | orchestrator |
2025-09-08 00:24:17.257107 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-09-08 00:24:17.257119 | orchestrator | Monday 08 September 2025 00:23:40 +0000 (0:00:00.093) 0:00:02.728 ******
2025-09-08 00:24:17.257130 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:17.257141 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:17.257151 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:17.257162 | orchestrator |
2025-09-08 00:24:17.257173 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-09-08 00:24:17.257184 | orchestrator | Monday 08 September 2025 00:23:40 +0000 (0:00:00.162) 0:00:02.890 ******
2025-09-08 00:24:17.257195 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:17.257206 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:17.257217 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:17.257228 | orchestrator |
2025-09-08 00:24:17.257239 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-09-08 00:24:17.257250 | orchestrator | Monday 08 September 2025 00:23:40 +0000 (0:00:00.168) 0:00:03.059 ******
2025-09-08 00:24:17.257261 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:24:17.257274 | orchestrator |
2025-09-08 00:24:17.257285 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-09-08 00:24:17.257296 | orchestrator | Monday 08 September 2025 00:23:41 +0000 (0:00:00.129) 0:00:03.189 ******
2025-09-08 00:24:17.257336 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:17.257347 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:17.257358 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:17.257369 | orchestrator |
2025-09-08 00:24:17.257380 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-09-08 00:24:17.257390 | orchestrator | Monday 08 September 2025 00:23:41 +0000 (0:00:00.406) 0:00:03.595 ******
2025-09-08 00:24:17.257401 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:24:17.257412 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:24:17.257423 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:24:17.257433 | orchestrator |
2025-09-08 00:24:17.257444 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-09-08 00:24:17.257455 | orchestrator | Monday 08 September 2025 00:23:41 +0000 (0:00:00.094) 0:00:03.690 ******
2025-09-08 00:24:17.257466 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:24:17.257476 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:24:17.257487 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:24:17.257497 | orchestrator |
2025-09-08 00:24:17.257508 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-09-08 00:24:17.257519 | orchestrator | Monday 08 September 2025 00:23:42 +0000 (0:00:01.002) 0:00:04.692 ******
2025-09-08 00:24:17.257530 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:17.257540 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:17.257551 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:17.257562 | orchestrator |
2025-09-08 00:24:17.257572 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-09-08 00:24:17.257584 | orchestrator | Monday 08 September 2025 00:23:43 +0000 (0:00:00.467) 0:00:05.159 ******
2025-09-08 00:24:17.257594 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:24:17.257605 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:24:17.257616 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:24:17.257626 | orchestrator |
2025-09-08 00:24:17.257658 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-09-08 00:24:17.257670 | orchestrator | Monday 08 September 2025 00:23:44 +0000 (0:00:01.102) 0:00:06.262 ******
2025-09-08 00:24:17.257681 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:24:17.257692 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:24:17.257702 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:24:17.257713 | orchestrator |
2025-09-08 00:24:17.257723 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-09-08 00:24:17.257752 | orchestrator | Monday 08 September 2025 00:24:00 +0000 (0:00:16.472) 0:00:22.734 ******
2025-09-08 00:24:17.257764 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:24:17.257774 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:24:17.257790 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:24:17.257801 | orchestrator |
2025-09-08 00:24:17.257812 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-09-08 00:24:17.257840 | orchestrator | Monday 08 September 2025 00:24:00 +0000 (0:00:00.126) 0:00:22.861 ******
2025-09-08 00:24:17.257852 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:24:17.257863 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:24:17.257874 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:24:17.257885 | orchestrator |
2025-09-08 00:24:17.257895 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-08 00:24:17.257906 | orchestrator | Monday 08 September 2025 00:24:08 +0000 (0:00:07.253) 0:00:30.115 ******
2025-09-08 00:24:17.257917 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:17.257928 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:17.257939 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:17.257949 | orchestrator |
2025-09-08 00:24:17.257960 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-08 00:24:17.257971 | orchestrator | Monday 08 September 2025 00:24:08 +0000 (0:00:00.437) 0:00:30.552 ******
2025-09-08 00:24:17.257981 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-09-08 00:24:17.258001 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-09-08 00:24:17.258011 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-09-08 00:24:17.258070 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-09-08 00:24:17.258082 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-09-08 00:24:17.258093 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-09-08 00:24:17.258103 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-09-08 00:24:17.258114 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-09-08 00:24:17.258125 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-09-08 00:24:17.258136 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-09-08 00:24:17.258146 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-09-08 00:24:17.258157 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-09-08 00:24:17.258168 | orchestrator |
2025-09-08 00:24:17.258179 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-08 00:24:17.258190 | orchestrator | Monday 08 September 2025 00:24:11 +0000 (0:00:03.461) 0:00:34.014 ******
2025-09-08 00:24:17.258201 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:17.258212 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:17.258223 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:17.258234 | orchestrator |
2025-09-08 00:24:17.258245 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-08 00:24:17.258255 | orchestrator |
2025-09-08 00:24:17.258267 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-08 00:24:17.258278 | orchestrator | Monday 08 September 2025 00:24:13 +0000 (0:00:01.399) 0:00:35.413 ******
2025-09-08 00:24:17.258288 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:24:17.258299 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:24:17.258310 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:24:17.258321 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:17.258332 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:17.258342 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:17.258353 | orchestrator | ok: [testbed-manager]
2025-09-08 00:24:17.258364 | orchestrator |
2025-09-08 00:24:17.258375 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:24:17.258387 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:24:17.258398 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:24:17.258411 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:24:17.258422 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:24:17.258433 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:24:17.258444 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:24:17.258455 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:24:17.258466 | orchestrator |
2025-09-08 00:24:17.258477 | orchestrator |
2025-09-08 00:24:17.258488 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:24:17.258499 | orchestrator | Monday 08 September 2025 00:24:17 +0000 (0:00:03.928) 0:00:39.341 ******
2025-09-08 00:24:17.258510 | orchestrator | ===============================================================================
2025-09-08 00:24:17.258527 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.47s
2025-09-08 00:24:17.258538 | orchestrator | Install required packages (Debian) -------------------------------------- 7.25s
2025-09-08 00:24:17.258548 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.93s
2025-09-08 00:24:17.258559 | orchestrator | Copy fact files --------------------------------------------------------- 3.46s
2025-09-08 00:24:17.258575 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.40s
2025-09-08 00:24:17.258586 | orchestrator | Create custom facts directory ------------------------------------------- 1.38s
2025-09-08 00:24:17.258603 | orchestrator | Copy fact file ---------------------------------------------------------- 1.16s
2025-09-08 00:24:17.463158 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.10s
2025-09-08 00:24:17.463230 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.00s
2025-09-08 00:24:17.463244 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2025-09-08 00:24:17.463256 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s
2025-09-08 00:24:17.463268 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.41s
2025-09-08 00:24:17.463279 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.17s
2025-09-08 00:24:17.463290 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.16s
2025-09-08 00:24:17.463301 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s
2025-09-08 00:24:17.463314 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.13s
2025-09-08 00:24:17.463325 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.09s
2025-09-08 00:24:17.463335 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s
2025-09-08 00:24:17.754274 | orchestrator | + osism apply bootstrap
2025-09-08 00:24:29.765083 | orchestrator | 2025-09-08 00:24:29 | INFO  | Task 253e72b3-da98-4969-ba56-cfc4bf6ac6cf (bootstrap) was prepared for execution.
2025-09-08 00:24:29.765203 | orchestrator | 2025-09-08 00:24:29 | INFO  | It takes a moment until task 253e72b3-da98-4969-ba56-cfc4bf6ac6cf (bootstrap) has been started and output is visible here.
2025-09-08 00:24:45.110403 | orchestrator |
2025-09-08 00:24:45.110548 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-09-08 00:24:45.110567 | orchestrator |
2025-09-08 00:24:45.110579 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-09-08 00:24:45.110592 | orchestrator | Monday 08 September 2025 00:24:33 +0000 (0:00:00.162) 0:00:00.162 ******
2025-09-08 00:24:45.110603 | orchestrator | ok: [testbed-manager]
2025-09-08 00:24:45.110616 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:24:45.110627 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:24:45.110681 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:24:45.110693 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:45.110705 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:45.110716 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:45.110727 | orchestrator |
2025-09-08 00:24:45.110738 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-08 00:24:45.110750 | orchestrator |
2025-09-08 00:24:45.110761 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-08 00:24:45.110772 | orchestrator | Monday 08 September 2025 00:24:33 +0000 (0:00:00.231) 0:00:00.394 ******
2025-09-08 00:24:45.110783 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:24:45.110794 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:24:45.110806 | orchestrator | ok: [testbed-manager]
2025-09-08 00:24:45.110817 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:24:45.110828 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:45.110839 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:45.110850 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:45.110884 | orchestrator |
2025-09-08 00:24:45.110896 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-09-08 00:24:45.110907 | orchestrator |
2025-09-08 00:24:45.110918 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-08 00:24:45.110931 | orchestrator | Monday 08 September 2025 00:24:37 +0000 (0:00:03.604) 0:00:03.998 ******
2025-09-08 00:24:45.110944 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-09-08 00:24:45.110958 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-08 00:24:45.110971 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-09-08 00:24:45.110983 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-09-08 00:24:45.110997 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-08 00:24:45.111010 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-09-08 00:24:45.111022 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-08 00:24:45.111035 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-09-08 00:24:45.111048 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-09-08 00:24:45.111060 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-09-08 00:24:45.111073 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-08 00:24:45.111087 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-09-08 00:24:45.111100 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-08 00:24:45.111112 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-09-08 00:24:45.111125 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-08 00:24:45.111139 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-09-08 00:24:45.111151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-08 00:24:45.111163 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-09-08 00:24:45.111176 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:24:45.111189 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-08 00:24:45.111201 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-08 00:24:45.111214 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-09-08 00:24:45.111226 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-08 00:24:45.111239 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-09-08 00:24:45.111251 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-08 00:24:45.111264 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-09-08 00:24:45.111277 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:24:45.111289 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-08 00:24:45.111300 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:24:45.111311 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-08 00:24:45.111322 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-08 00:24:45.111333 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-09-08 00:24:45.111344 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:24:45.111355 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-08 00:24:45.111366 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-09-08 00:24:45.111377 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:24:45.111388 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:24:45.111399 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-08 00:24:45.111410 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-08 00:24:45.111421 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-09-08 00:24:45.111454 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-09-08 00:24:45.111473 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-08 00:24:45.111485 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-09-08 00:24:45.111496 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-09-08 00:24:45.111507 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:24:45.111519 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-09-08 00:24:45.111549 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-09-08 00:24:45.111561 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-08 00:24:45.111571 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-09-08 00:24:45.111582 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-09-08 00:24:45.111593 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:24:45.111604 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:24:45.111615 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-09-08 00:24:45.111626 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-09-08 00:24:45.111670 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-09-08 00:24:45.111682 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:24:45.111693 | orchestrator |
2025-09-08 00:24:45.111704 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-09-08 00:24:45.111715 | orchestrator |
2025-09-08 00:24:45.111726 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-09-08 00:24:45.111737 | orchestrator | Monday 08 September 2025 00:24:37 +0000 (0:00:00.364) 0:00:04.363 ******
2025-09-08 00:24:45.111748 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:24:45.111759 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:24:45.111769 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:45.111780 | orchestrator | ok: [testbed-manager]
2025-09-08 00:24:45.111791 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:45.111802 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:24:45.111813 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:45.111823 | orchestrator |
2025-09-08 00:24:45.111834 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-09-08 00:24:45.111845 | orchestrator | Monday 08 September 2025 00:24:39 +0000 (0:00:01.155) 0:00:05.518 ******
2025-09-08 00:24:45.111856 | orchestrator | ok: [testbed-manager]
2025-09-08 00:24:45.111867 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:45.111878 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:45.111888 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:24:45.111899 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:24:45.111910 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:45.111921 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:24:45.111931 | orchestrator |
2025-09-08 00:24:45.111942 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-09-08 00:24:45.111953 | orchestrator | Monday 08 September 2025 00:24:40 +0000 (0:00:01.252) 0:00:06.770 ******
2025-09-08 00:24:45.111965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:24:45.111978 | orchestrator |
2025-09-08 00:24:45.111989 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-09-08 00:24:45.112001 | orchestrator
| Monday 08 September 2025 00:24:40 +0000 (0:00:00.292) 0:00:07.063 ****** 2025-09-08 00:24:45.112012 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:24:45.112022 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:24:45.112033 | orchestrator | changed: [testbed-manager] 2025-09-08 00:24:45.112044 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:24:45.112055 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:24:45.112065 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:24:45.112076 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:24:45.112087 | orchestrator | 2025-09-08 00:24:45.112105 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-09-08 00:24:45.112116 | orchestrator | Monday 08 September 2025 00:24:42 +0000 (0:00:01.961) 0:00:09.024 ****** 2025-09-08 00:24:45.112127 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:24:45.112139 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:24:45.112151 | orchestrator | 2025-09-08 00:24:45.112168 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-09-08 00:24:45.112179 | orchestrator | Monday 08 September 2025 00:24:42 +0000 (0:00:00.315) 0:00:09.340 ****** 2025-09-08 00:24:45.112190 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:24:45.112201 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:24:45.112212 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:24:45.112223 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:24:45.112233 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:24:45.112244 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:24:45.112255 | orchestrator | 2025-09-08 00:24:45.112266 | orchestrator | TASK [osism.commons.proxy : 
Set system wide settings in environment file] ****** 2025-09-08 00:24:45.112276 | orchestrator | Monday 08 September 2025 00:24:43 +0000 (0:00:00.978) 0:00:10.318 ****** 2025-09-08 00:24:45.112287 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:24:45.112298 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:24:45.112309 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:24:45.112319 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:24:45.112330 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:24:45.112341 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:24:45.112352 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:24:45.112362 | orchestrator | 2025-09-08 00:24:45.112373 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-08 00:24:45.112384 | orchestrator | Monday 08 September 2025 00:24:44 +0000 (0:00:00.616) 0:00:10.935 ****** 2025-09-08 00:24:45.112395 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:24:45.112405 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:24:45.112416 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:24:45.112427 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:24:45.112438 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:24:45.112448 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:24:45.112459 | orchestrator | ok: [testbed-manager] 2025-09-08 00:24:45.112470 | orchestrator | 2025-09-08 00:24:45.112481 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-08 00:24:45.112493 | orchestrator | Monday 08 September 2025 00:24:44 +0000 (0:00:00.434) 0:00:11.369 ****** 2025-09-08 00:24:45.112504 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:24:45.112515 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:24:45.112533 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:24:57.255307 | orchestrator | skipping: 
[testbed-node-2] 2025-09-08 00:24:57.255428 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:24:57.255444 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:24:57.255456 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:24:57.255468 | orchestrator | 2025-09-08 00:24:57.255480 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-08 00:24:57.255494 | orchestrator | Monday 08 September 2025 00:24:45 +0000 (0:00:00.228) 0:00:11.598 ****** 2025-09-08 00:24:57.255507 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:24:57.255537 | orchestrator | 2025-09-08 00:24:57.255550 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-08 00:24:57.255562 | orchestrator | Monday 08 September 2025 00:24:45 +0000 (0:00:00.289) 0:00:11.887 ****** 2025-09-08 00:24:57.255600 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:24:57.255613 | orchestrator | 2025-09-08 00:24:57.255624 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-08 00:24:57.255684 | orchestrator | Monday 08 September 2025 00:24:45 +0000 (0:00:00.317) 0:00:12.205 ****** 2025-09-08 00:24:57.255697 | orchestrator | ok: [testbed-manager] 2025-09-08 00:24:57.255709 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:24:57.255720 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:24:57.255731 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:24:57.255743 | orchestrator | ok: [testbed-node-0] 2025-09-08 
00:24:57.255754 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:24:57.255765 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:24:57.255776 | orchestrator | 2025-09-08 00:24:57.255788 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-08 00:24:57.255799 | orchestrator | Monday 08 September 2025 00:24:47 +0000 (0:00:01.420) 0:00:13.625 ****** 2025-09-08 00:24:57.255811 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:24:57.255822 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:24:57.255833 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:24:57.255844 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:24:57.255856 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:24:57.255867 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:24:57.255878 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:24:57.255889 | orchestrator | 2025-09-08 00:24:57.255900 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-08 00:24:57.255912 | orchestrator | Monday 08 September 2025 00:24:47 +0000 (0:00:00.225) 0:00:13.850 ****** 2025-09-08 00:24:57.255923 | orchestrator | ok: [testbed-manager] 2025-09-08 00:24:57.255934 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:24:57.255945 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:24:57.255956 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:24:57.255967 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:24:57.255978 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:24:57.255990 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:24:57.256001 | orchestrator | 2025-09-08 00:24:57.256012 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-08 00:24:57.256023 | orchestrator | Monday 08 September 2025 00:24:48 +0000 (0:00:00.555) 0:00:14.406 ****** 2025-09-08 00:24:57.256034 | orchestrator | skipping: 
[testbed-manager] 2025-09-08 00:24:57.256046 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:24:57.256057 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:24:57.256069 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:24:57.256080 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:24:57.256091 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:24:57.256102 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:24:57.256113 | orchestrator | 2025-09-08 00:24:57.256126 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-08 00:24:57.256139 | orchestrator | Monday 08 September 2025 00:24:48 +0000 (0:00:00.247) 0:00:14.653 ****** 2025-09-08 00:24:57.256150 | orchestrator | ok: [testbed-manager] 2025-09-08 00:24:57.256161 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:24:57.256172 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:24:57.256184 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:24:57.256195 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:24:57.256206 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:24:57.256217 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:24:57.256228 | orchestrator | 2025-09-08 00:24:57.256240 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-08 00:24:57.256251 | orchestrator | Monday 08 September 2025 00:24:48 +0000 (0:00:00.671) 0:00:15.325 ****** 2025-09-08 00:24:57.256271 | orchestrator | ok: [testbed-manager] 2025-09-08 00:24:57.256282 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:24:57.256293 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:24:57.256304 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:24:57.256315 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:24:57.256326 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:24:57.256338 | orchestrator | changed: 
[testbed-node-5] 2025-09-08 00:24:57.256349 | orchestrator | 2025-09-08 00:24:57.256360 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-08 00:24:57.256371 | orchestrator | Monday 08 September 2025 00:24:50 +0000 (0:00:01.133) 0:00:16.459 ****** 2025-09-08 00:24:57.256382 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:24:57.256393 | orchestrator | ok: [testbed-manager] 2025-09-08 00:24:57.256405 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:24:57.256416 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:24:57.256428 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:24:57.256439 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:24:57.256450 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:24:57.256461 | orchestrator | 2025-09-08 00:24:57.256473 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-08 00:24:57.256484 | orchestrator | Monday 08 September 2025 00:24:51 +0000 (0:00:01.039) 0:00:17.499 ****** 2025-09-08 00:24:57.256514 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:24:57.256527 | orchestrator | 2025-09-08 00:24:57.256538 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-08 00:24:57.256549 | orchestrator | Monday 08 September 2025 00:24:51 +0000 (0:00:00.307) 0:00:17.806 ****** 2025-09-08 00:24:57.256560 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:24:57.256571 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:24:57.256581 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:24:57.256592 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:24:57.256603 | orchestrator | changed: [testbed-node-5] 2025-09-08 
00:24:57.256614 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:24:57.256625 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:24:57.256652 | orchestrator | 2025-09-08 00:24:57.256664 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-08 00:24:57.256675 | orchestrator | Monday 08 September 2025 00:24:52 +0000 (0:00:01.319) 0:00:19.126 ****** 2025-09-08 00:24:57.256686 | orchestrator | ok: [testbed-manager] 2025-09-08 00:24:57.256697 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:24:57.256707 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:24:57.256718 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:24:57.256729 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:24:57.256740 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:24:57.256751 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:24:57.256761 | orchestrator | 2025-09-08 00:24:57.256772 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-08 00:24:57.256783 | orchestrator | Monday 08 September 2025 00:24:52 +0000 (0:00:00.229) 0:00:19.356 ****** 2025-09-08 00:24:57.256794 | orchestrator | ok: [testbed-manager] 2025-09-08 00:24:57.256805 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:24:57.256816 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:24:57.256827 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:24:57.256837 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:24:57.256848 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:24:57.256859 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:24:57.256870 | orchestrator | 2025-09-08 00:24:57.256881 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-08 00:24:57.256891 | orchestrator | Monday 08 September 2025 00:24:53 +0000 (0:00:00.214) 0:00:19.570 ****** 2025-09-08 00:24:57.256902 | orchestrator | ok: [testbed-manager] 2025-09-08 00:24:57.256913 | 
orchestrator | ok: [testbed-node-0] 2025-09-08 00:24:57.256931 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:24:57.256942 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:24:57.256953 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:24:57.256963 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:24:57.256974 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:24:57.256985 | orchestrator | 2025-09-08 00:24:57.256996 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-08 00:24:57.257048 | orchestrator | Monday 08 September 2025 00:24:53 +0000 (0:00:00.265) 0:00:19.835 ****** 2025-09-08 00:24:57.257062 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:24:57.257075 | orchestrator | 2025-09-08 00:24:57.257085 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-08 00:24:57.257096 | orchestrator | Monday 08 September 2025 00:24:53 +0000 (0:00:00.282) 0:00:20.117 ****** 2025-09-08 00:24:57.257107 | orchestrator | ok: [testbed-manager] 2025-09-08 00:24:57.257118 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:24:57.257129 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:24:57.257140 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:24:57.257151 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:24:57.257161 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:24:57.257172 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:24:57.257183 | orchestrator | 2025-09-08 00:24:57.257199 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-08 00:24:57.257210 | orchestrator | Monday 08 September 2025 00:24:54 +0000 (0:00:00.536) 0:00:20.653 ****** 2025-09-08 00:24:57.257221 | orchestrator | 
skipping: [testbed-manager] 2025-09-08 00:24:57.257232 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:24:57.257243 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:24:57.257254 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:24:57.257265 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:24:57.257276 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:24:57.257287 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:24:57.257298 | orchestrator | 2025-09-08 00:24:57.257308 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-08 00:24:57.257319 | orchestrator | Monday 08 September 2025 00:24:54 +0000 (0:00:00.203) 0:00:20.857 ****** 2025-09-08 00:24:57.257330 | orchestrator | ok: [testbed-manager] 2025-09-08 00:24:57.257341 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:24:57.257352 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:24:57.257363 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:24:57.257374 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:24:57.257385 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:24:57.257396 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:24:57.257407 | orchestrator | 2025-09-08 00:24:57.257418 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-08 00:24:57.257429 | orchestrator | Monday 08 September 2025 00:24:55 +0000 (0:00:01.177) 0:00:22.035 ****** 2025-09-08 00:24:57.257440 | orchestrator | ok: [testbed-manager] 2025-09-08 00:24:57.257451 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:24:57.257462 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:24:57.257473 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:24:57.257484 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:24:57.257495 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:24:57.257505 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:24:57.257516 | orchestrator | 
2025-09-08 00:24:57.257527 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-08 00:24:57.257538 | orchestrator | Monday 08 September 2025 00:24:56 +0000 (0:00:00.552) 0:00:22.587 ****** 2025-09-08 00:24:57.257549 | orchestrator | ok: [testbed-manager] 2025-09-08 00:24:57.257560 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:24:57.257571 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:24:57.257582 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:24:57.257607 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:25:37.798866 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:25:37.798982 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:25:37.798998 | orchestrator | 2025-09-08 00:25:37.799011 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-08 00:25:37.799025 | orchestrator | Monday 08 September 2025 00:24:57 +0000 (0:00:01.049) 0:00:23.637 ****** 2025-09-08 00:25:37.799036 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:25:37.799048 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:25:37.799059 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:25:37.799070 | orchestrator | changed: [testbed-manager] 2025-09-08 00:25:37.799081 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:25:37.799093 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:25:37.799103 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:25:37.799114 | orchestrator | 2025-09-08 00:25:37.799125 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-08 00:25:37.799137 | orchestrator | Monday 08 September 2025 00:25:14 +0000 (0:00:16.917) 0:00:40.555 ****** 2025-09-08 00:25:37.799148 | orchestrator | ok: [testbed-manager] 2025-09-08 00:25:37.799159 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:25:37.799169 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:25:37.799180 | orchestrator 
| ok: [testbed-node-2] 2025-09-08 00:25:37.799191 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:25:37.799202 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:25:37.799213 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:25:37.799223 | orchestrator | 2025-09-08 00:25:37.799234 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-09-08 00:25:37.799245 | orchestrator | Monday 08 September 2025 00:25:14 +0000 (0:00:00.207) 0:00:40.762 ****** 2025-09-08 00:25:37.799256 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:25:37.799267 | orchestrator | ok: [testbed-manager] 2025-09-08 00:25:37.799278 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:25:37.799288 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:25:37.799299 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:25:37.799310 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:25:37.799321 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:25:37.799331 | orchestrator | 2025-09-08 00:25:37.799342 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-09-08 00:25:37.799355 | orchestrator | Monday 08 September 2025 00:25:14 +0000 (0:00:00.290) 0:00:41.052 ****** 2025-09-08 00:25:37.799368 | orchestrator | ok: [testbed-manager] 2025-09-08 00:25:37.799381 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:25:37.799394 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:25:37.799407 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:25:37.799420 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:25:37.799432 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:25:37.799445 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:25:37.799457 | orchestrator | 2025-09-08 00:25:37.799471 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-09-08 00:25:37.799483 | orchestrator | Monday 08 September 2025 00:25:14 +0000 (0:00:00.240) 0:00:41.292 ****** 2025-09-08 
00:25:37.799498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:25:37.799514 | orchestrator | 2025-09-08 00:25:37.799527 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-09-08 00:25:37.799540 | orchestrator | Monday 08 September 2025 00:25:15 +0000 (0:00:00.293) 0:00:41.586 ****** 2025-09-08 00:25:37.799551 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:25:37.799562 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:25:37.799573 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:25:37.799584 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:25:37.799595 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:25:37.799605 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:25:37.799616 | orchestrator | ok: [testbed-manager] 2025-09-08 00:25:37.799677 | orchestrator | 2025-09-08 00:25:37.799690 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-09-08 00:25:37.799700 | orchestrator | Monday 08 September 2025 00:25:16 +0000 (0:00:01.332) 0:00:42.918 ****** 2025-09-08 00:25:37.799727 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:25:37.799738 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:25:37.799749 | orchestrator | changed: [testbed-manager] 2025-09-08 00:25:37.799760 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:25:37.799771 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:25:37.799782 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:25:37.799793 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:25:37.799803 | orchestrator | 2025-09-08 00:25:37.799814 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-09-08 00:25:37.799825 | 
orchestrator | Monday 08 September 2025 00:25:17 +0000 (0:00:00.979) 0:00:43.898 ****** 2025-09-08 00:25:37.799836 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:25:37.799847 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:25:37.799857 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:25:37.799868 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:25:37.799879 | orchestrator | ok: [testbed-manager] 2025-09-08 00:25:37.799889 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:25:37.799900 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:25:37.799911 | orchestrator | 2025-09-08 00:25:37.799921 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-09-08 00:25:37.799932 | orchestrator | Monday 08 September 2025 00:25:18 +0000 (0:00:00.748) 0:00:44.647 ****** 2025-09-08 00:25:37.799944 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:25:37.799957 | orchestrator | 2025-09-08 00:25:37.799968 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-09-08 00:25:37.799979 | orchestrator | Monday 08 September 2025 00:25:18 +0000 (0:00:00.303) 0:00:44.950 ****** 2025-09-08 00:25:37.799990 | orchestrator | changed: [testbed-manager] 2025-09-08 00:25:37.800001 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:25:37.800012 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:25:37.800023 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:25:37.800033 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:25:37.800044 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:25:37.800055 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:25:37.800066 | orchestrator | 2025-09-08 00:25:37.800094 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2025-09-08 00:25:37.800106 | orchestrator | Monday 08 September 2025 00:25:19 +0000 (0:00:00.978) 0:00:45.929 ****** 2025-09-08 00:25:37.800117 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:25:37.800128 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:25:37.800138 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:25:37.800149 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:25:37.800160 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:25:37.800171 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:25:37.800181 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:25:37.800192 | orchestrator | 2025-09-08 00:25:37.800203 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-09-08 00:25:37.800214 | orchestrator | Monday 08 September 2025 00:25:19 +0000 (0:00:00.308) 0:00:46.237 ****** 2025-09-08 00:25:37.800225 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:25:37.800236 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:25:37.800246 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:25:37.800257 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:25:37.800268 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:25:37.800279 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:25:37.800289 | orchestrator | changed: [testbed-manager] 2025-09-08 00:25:37.800308 | orchestrator | 2025-09-08 00:25:37.800319 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-09-08 00:25:37.800330 | orchestrator | Monday 08 September 2025 00:25:32 +0000 (0:00:12.589) 0:00:58.827 ****** 2025-09-08 00:25:37.800341 | orchestrator | ok: [testbed-manager] 2025-09-08 00:25:37.800351 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:25:37.800362 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:25:37.800373 | orchestrator | ok: [testbed-node-3] 2025-09-08 
00:25:37.800384 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:25:37.800395 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:25:37.800405 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:25:37.800416 | orchestrator | 2025-09-08 00:25:37.800427 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-09-08 00:25:37.800438 | orchestrator | Monday 08 September 2025 00:25:33 +0000 (0:00:01.215) 0:01:00.042 ****** 2025-09-08 00:25:37.800449 | orchestrator | ok: [testbed-manager] 2025-09-08 00:25:37.800460 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:25:37.800470 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:25:37.800481 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:25:37.800492 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:25:37.800503 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:25:37.800514 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:25:37.800524 | orchestrator | 2025-09-08 00:25:37.800535 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-09-08 00:25:37.800546 | orchestrator | Monday 08 September 2025 00:25:34 +0000 (0:00:00.880) 0:01:00.922 ****** 2025-09-08 00:25:37.800557 | orchestrator | ok: [testbed-manager] 2025-09-08 00:25:37.800567 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:25:37.800578 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:25:37.800589 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:25:37.800600 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:25:37.800610 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:25:37.800621 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:25:37.800632 | orchestrator | 2025-09-08 00:25:37.800659 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-09-08 00:25:37.800670 | orchestrator | Monday 08 September 2025 00:25:34 +0000 (0:00:00.211) 0:01:01.134 ****** 2025-09-08 00:25:37.800681 | 
orchestrator | ok: [testbed-manager] 2025-09-08 00:25:37.800692 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:25:37.800703 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:25:37.800714 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:25:37.800725 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:25:37.800736 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:25:37.800747 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:25:37.800757 | orchestrator | 2025-09-08 00:25:37.800768 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-09-08 00:25:37.800779 | orchestrator | Monday 08 September 2025 00:25:34 +0000 (0:00:00.211) 0:01:01.346 ****** 2025-09-08 00:25:37.800790 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:25:37.800802 | orchestrator | 2025-09-08 00:25:37.800813 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-09-08 00:25:37.800824 | orchestrator | Monday 08 September 2025 00:25:35 +0000 (0:00:00.294) 0:01:01.640 ****** 2025-09-08 00:25:37.800835 | orchestrator | ok: [testbed-manager] 2025-09-08 00:25:37.800846 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:25:37.800856 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:25:37.800867 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:25:37.800878 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:25:37.800889 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:25:37.800900 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:25:37.800911 | orchestrator | 2025-09-08 00:25:37.800921 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-09-08 00:25:37.800932 | orchestrator | Monday 08 September 2025 00:25:36 +0000 
(0:00:01.712) 0:01:03.353 ****** 2025-09-08 00:25:37.800950 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:25:37.800961 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:25:37.800972 | orchestrator | changed: [testbed-manager] 2025-09-08 00:25:37.800983 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:25:37.800994 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:25:37.801004 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:25:37.801015 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:25:37.801026 | orchestrator | 2025-09-08 00:25:37.801037 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-09-08 00:25:37.801048 | orchestrator | Monday 08 September 2025 00:25:37 +0000 (0:00:00.557) 0:01:03.910 ****** 2025-09-08 00:25:37.801059 | orchestrator | ok: [testbed-manager] 2025-09-08 00:25:37.801070 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:25:37.801081 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:25:37.801091 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:25:37.801102 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:25:37.801113 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:25:37.801124 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:25:37.801135 | orchestrator | 2025-09-08 00:25:37.801152 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-09-08 00:27:59.721419 | orchestrator | Monday 08 September 2025 00:25:37 +0000 (0:00:00.266) 0:01:04.177 ****** 2025-09-08 00:27:59.721543 | orchestrator | ok: [testbed-manager] 2025-09-08 00:27:59.721560 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:27:59.721572 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:27:59.721584 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:27:59.721595 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:27:59.721606 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:27:59.721617 | orchestrator | ok: 
[testbed-node-5] 2025-09-08 00:27:59.721680 | orchestrator | 2025-09-08 00:27:59.721695 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-09-08 00:27:59.721706 | orchestrator | Monday 08 September 2025 00:25:39 +0000 (0:00:01.244) 0:01:05.422 ****** 2025-09-08 00:27:59.721718 | orchestrator | changed: [testbed-manager] 2025-09-08 00:27:59.721729 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:27:59.721740 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:27:59.721751 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:27:59.721762 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:27:59.721773 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:27:59.721784 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:27:59.721795 | orchestrator | 2025-09-08 00:27:59.721807 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-09-08 00:27:59.721818 | orchestrator | Monday 08 September 2025 00:25:40 +0000 (0:00:01.805) 0:01:07.227 ****** 2025-09-08 00:27:59.721829 | orchestrator | ok: [testbed-manager] 2025-09-08 00:27:59.721840 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:27:59.721851 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:27:59.721862 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:27:59.721872 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:27:59.721883 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:27:59.721894 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:27:59.721905 | orchestrator | 2025-09-08 00:27:59.721916 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-09-08 00:27:59.721927 | orchestrator | Monday 08 September 2025 00:25:43 +0000 (0:00:02.578) 0:01:09.806 ****** 2025-09-08 00:27:59.721939 | orchestrator | ok: [testbed-manager] 2025-09-08 00:27:59.721953 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:27:59.721965 | orchestrator | 
ok: [testbed-node-4] 2025-09-08 00:27:59.721979 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:27:59.722010 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:27:59.722080 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:27:59.722094 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:27:59.722106 | orchestrator | 2025-09-08 00:27:59.722120 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-09-08 00:27:59.722158 | orchestrator | Monday 08 September 2025 00:26:20 +0000 (0:00:37.302) 0:01:47.108 ****** 2025-09-08 00:27:59.722171 | orchestrator | changed: [testbed-manager] 2025-09-08 00:27:59.722183 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:27:59.722196 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:27:59.722209 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:27:59.722222 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:27:59.722234 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:27:59.722247 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:27:59.722260 | orchestrator | 2025-09-08 00:27:59.722274 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-09-08 00:27:59.722287 | orchestrator | Monday 08 September 2025 00:27:42 +0000 (0:01:21.567) 0:03:08.676 ****** 2025-09-08 00:27:59.722300 | orchestrator | ok: [testbed-manager] 2025-09-08 00:27:59.722311 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:27:59.722322 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:27:59.722333 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:27:59.722344 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:27:59.722354 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:27:59.722365 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:27:59.722376 | orchestrator | 2025-09-08 00:27:59.722386 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-09-08 00:27:59.722398 
| orchestrator | Monday 08 September 2025 00:27:43 +0000 (0:00:01.522) 0:03:10.199 ****** 2025-09-08 00:27:59.722409 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:27:59.722420 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:27:59.722431 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:27:59.722447 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:27:59.722458 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:27:59.722469 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:27:59.722480 | orchestrator | changed: [testbed-manager] 2025-09-08 00:27:59.722490 | orchestrator | 2025-09-08 00:27:59.722501 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-09-08 00:27:59.722512 | orchestrator | Monday 08 September 2025 00:27:55 +0000 (0:00:11.443) 0:03:21.643 ****** 2025-09-08 00:27:59.722532 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-09-08 00:27:59.722555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 
'value': 8192}]}) 2025-09-08 00:27:59.722591 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-09-08 00:27:59.722605 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-09-08 00:27:59.722624 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-09-08 00:27:59.722655 | orchestrator | 2025-09-08 00:27:59.722667 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-09-08 00:27:59.722678 | orchestrator | Monday 08 September 2025 00:27:55 +0000 (0:00:00.311) 0:03:21.954 ****** 2025-09-08 00:27:59.722689 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-08 00:27:59.722700 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:27:59.722711 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-08 00:27:59.722722 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-08 00:27:59.722733 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:27:59.722743 | orchestrator | skipping: [testbed-node-4] 2025-09-08 
00:27:59.722754 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-08 00:27:59.722765 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:27:59.722776 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-08 00:27:59.722787 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-08 00:27:59.722798 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-08 00:27:59.722809 | orchestrator | 2025-09-08 00:27:59.722819 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-09-08 00:27:59.722830 | orchestrator | Monday 08 September 2025 00:27:56 +0000 (0:00:00.595) 0:03:22.550 ****** 2025-09-08 00:27:59.722841 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-08 00:27:59.722853 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-08 00:27:59.722864 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-08 00:27:59.722875 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-08 00:27:59.722886 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-08 00:27:59.722902 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-08 00:27:59.722913 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-08 00:27:59.722924 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-08 00:27:59.722935 | orchestrator | skipping: [testbed-manager] => 
(item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-08 00:27:59.722946 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-08 00:27:59.722957 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:27:59.722967 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-08 00:27:59.722978 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-08 00:27:59.722989 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-08 00:27:59.723000 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-08 00:27:59.723011 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-08 00:27:59.723022 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-08 00:27:59.723039 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-08 00:27:59.723050 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-08 00:27:59.723061 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-08 00:27:59.723072 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-08 00:27:59.723090 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-08 00:28:01.824384 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-08 00:28:01.824489 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-08 
00:28:01.824503 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-08 00:28:01.824515 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-08 00:28:01.824527 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-08 00:28:01.824539 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:28:01.824551 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-08 00:28:01.824562 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-08 00:28:01.824573 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-08 00:28:01.824584 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-08 00:28:01.824595 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:28:01.824606 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-08 00:28:01.824617 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-08 00:28:01.824676 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-08 00:28:01.824689 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-08 00:28:01.824700 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-08 00:28:01.824711 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-08 00:28:01.824722 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-08 
00:28:01.824733 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-08 00:28:01.824744 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-08 00:28:01.824756 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-08 00:28:01.824767 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:28:01.824778 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-08 00:28:01.824789 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-08 00:28:01.824800 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-08 00:28:01.824810 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-08 00:28:01.824821 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-08 00:28:01.824833 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-08 00:28:01.824844 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-08 00:28:01.824882 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-08 00:28:01.824894 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-08 00:28:01.824905 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-08 00:28:01.824918 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-08 00:28:01.824931 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.core.wmem_max', 'value': 16777216}) 2025-09-08 00:28:01.824945 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-08 00:28:01.824958 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-08 00:28:01.824971 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-08 00:28:01.824984 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-08 00:28:01.824997 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-08 00:28:01.825009 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-08 00:28:01.825023 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-08 00:28:01.825035 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-08 00:28:01.825049 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-08 00:28:01.825080 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-08 00:28:01.825094 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-08 00:28:01.825107 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-08 00:28:01.825120 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-08 00:28:01.825133 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-08 00:28:01.825146 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-08 00:28:01.825158 | orchestrator 
| changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-08 00:28:01.825172 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-08 00:28:01.825184 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-08 00:28:01.825198 | orchestrator | 2025-09-08 00:28:01.825212 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-09-08 00:28:01.825226 | orchestrator | Monday 08 September 2025 00:27:59 +0000 (0:00:03.550) 0:03:26.100 ****** 2025-09-08 00:28:01.825239 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-08 00:28:01.825252 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-08 00:28:01.825265 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-08 00:28:01.825276 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-08 00:28:01.825287 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-08 00:28:01.825297 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-08 00:28:01.825308 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-08 00:28:01.825319 | orchestrator | 2025-09-08 00:28:01.825330 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-09-08 00:28:01.825348 | orchestrator | Monday 08 September 2025 00:28:00 +0000 (0:00:00.576) 0:03:26.676 ****** 2025-09-08 00:28:01.825359 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-08 00:28:01.825371 | orchestrator | skipping: [testbed-node-0] => (item={'name': 
'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-08 00:28:01.825382 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:28:01.825393 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:28:01.825404 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-08 00:28:01.825415 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-08 00:28:01.825426 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:28:01.825437 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:28:01.825466 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-08 00:28:01.825478 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-08 00:28:01.825494 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-08 00:28:01.825506 | orchestrator | 2025-09-08 00:28:01.825517 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-09-08 00:28:01.825528 | orchestrator | Monday 08 September 2025 00:28:00 +0000 (0:00:00.611) 0:03:27.288 ****** 2025-09-08 00:28:01.825539 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-08 00:28:01.825550 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:28:01.825561 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-08 00:28:01.825572 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-08 00:28:01.825583 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:28:01.825594 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:28:01.825605 | 
orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-08 00:28:01.825616 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:28:01.825648 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-08 00:28:01.825660 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-08 00:28:01.825671 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-08 00:28:01.825681 | orchestrator | 2025-09-08 00:28:01.825693 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-09-08 00:28:01.825703 | orchestrator | Monday 08 September 2025 00:28:01 +0000 (0:00:00.655) 0:03:27.944 ****** 2025-09-08 00:28:01.825714 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:28:01.825725 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:28:01.825737 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:28:01.825748 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:28:01.825759 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:28:01.825776 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:28:14.256035 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:28:14.256161 | orchestrator | 2025-09-08 00:28:14.256179 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-09-08 00:28:14.256193 | orchestrator | Monday 08 September 2025 00:28:01 +0000 (0:00:00.266) 0:03:28.210 ****** 2025-09-08 00:28:14.256205 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:28:14.256218 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:28:14.256229 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:28:14.256241 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:28:14.256277 | orchestrator | ok: [testbed-manager] 2025-09-08 
00:28:14.256289 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:28:14.256300 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:28:14.256310 | orchestrator | 2025-09-08 00:28:14.256322 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-09-08 00:28:14.256333 | orchestrator | Monday 08 September 2025 00:28:07 +0000 (0:00:05.743) 0:03:33.953 ****** 2025-09-08 00:28:14.256344 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-09-08 00:28:14.256355 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-09-08 00:28:14.256366 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:28:14.256377 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-09-08 00:28:14.256388 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:28:14.256416 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-09-08 00:28:14.256438 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:28:14.256449 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:28:14.256460 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-09-08 00:28:14.256470 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:28:14.256481 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-09-08 00:28:14.256496 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:28:14.256507 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-09-08 00:28:14.256518 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:28:14.256529 | orchestrator | 2025-09-08 00:28:14.256540 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-09-08 00:28:14.256551 | orchestrator | Monday 08 September 2025 00:28:07 +0000 (0:00:00.313) 0:03:34.266 ****** 2025-09-08 00:28:14.256562 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-09-08 00:28:14.256573 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-09-08 00:28:14.256584 | 
orchestrator | ok: [testbed-node-2] => (item=cron) 2025-09-08 00:28:14.256595 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-09-08 00:28:14.256606 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-09-08 00:28:14.256617 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-09-08 00:28:14.256647 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-09-08 00:28:14.256659 | orchestrator | 2025-09-08 00:28:14.256670 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-09-08 00:28:14.256681 | orchestrator | Monday 08 September 2025 00:28:09 +0000 (0:00:01.776) 0:03:36.043 ****** 2025-09-08 00:28:14.256694 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:28:14.256708 | orchestrator | 2025-09-08 00:28:14.256720 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-09-08 00:28:14.256731 | orchestrator | Monday 08 September 2025 00:28:10 +0000 (0:00:00.495) 0:03:36.539 ****** 2025-09-08 00:28:14.256742 | orchestrator | ok: [testbed-manager] 2025-09-08 00:28:14.256753 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:28:14.256764 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:28:14.256775 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:28:14.256786 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:28:14.256796 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:28:14.256807 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:28:14.256818 | orchestrator | 2025-09-08 00:28:14.256845 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-09-08 00:28:14.256857 | orchestrator | Monday 08 September 2025 00:28:11 +0000 (0:00:01.237) 0:03:37.776 ****** 2025-09-08 00:28:14.256868 | 
orchestrator | ok: [testbed-manager] 2025-09-08 00:28:14.256879 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:28:14.256890 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:28:14.256901 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:28:14.256912 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:28:14.256923 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:28:14.256942 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:28:14.256953 | orchestrator | 2025-09-08 00:28:14.256965 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-09-08 00:28:14.256976 | orchestrator | Monday 08 September 2025 00:28:11 +0000 (0:00:00.591) 0:03:38.368 ****** 2025-09-08 00:28:14.256987 | orchestrator | changed: [testbed-manager] 2025-09-08 00:28:14.256998 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:28:14.257009 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:28:14.257020 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:28:14.257031 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:28:14.257042 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:28:14.257052 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:28:14.257063 | orchestrator | 2025-09-08 00:28:14.257074 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-09-08 00:28:14.257085 | orchestrator | Monday 08 September 2025 00:28:12 +0000 (0:00:00.740) 0:03:39.108 ****** 2025-09-08 00:28:14.257096 | orchestrator | ok: [testbed-manager] 2025-09-08 00:28:14.257107 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:28:14.257118 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:28:14.257129 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:28:14.257140 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:28:14.257151 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:28:14.257162 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:28:14.257173 | orchestrator | 
2025-09-08 00:28:14.257183 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-09-08 00:28:14.257194 | orchestrator | Monday 08 September 2025 00:28:13 +0000 (0:00:00.563) 0:03:39.672 ******
2025-09-08 00:28:14.257229 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757289886.5953145, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:14.257245 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757289890.6663957, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:14.257258 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757289898.8057873, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:14.257269 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757289919.4991405, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:14.257286 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757289899.3796408, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:14.257306 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757289889.991445, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:14.257318 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757289888.6610966, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:14.257348 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:30.419842 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:30.419985 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:30.420003 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:30.420928 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:30.420963 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:30.420976 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:30.420988 | orchestrator |
2025-09-08 00:28:30.421002 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-09-08 00:28:30.421014 | orchestrator | Monday 08 September 2025 00:28:14 +0000 (0:00:00.962) 0:03:40.635 ******
2025-09-08 00:28:30.421026 | orchestrator | changed: [testbed-manager]
2025-09-08 00:28:30.421037 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:28:30.421048 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:28:30.421059 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:28:30.421069 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:28:30.421080 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:28:30.421091 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:28:30.421102 | orchestrator |
2025-09-08 00:28:30.421113 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-09-08 00:28:30.421124 | orchestrator | Monday 08 September 2025 00:28:15 +0000 (0:00:01.117) 0:03:41.753 ******
2025-09-08 00:28:30.421134 | orchestrator | changed: [testbed-manager]
2025-09-08 00:28:30.421145 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:28:30.421156 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:28:30.421166 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:28:30.421199 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:28:30.421211 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:28:30.421222 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:28:30.421232 | orchestrator |
2025-09-08 00:28:30.421243 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-09-08 00:28:30.421254 | orchestrator | Monday 08 September 2025 00:28:16 +0000 (0:00:01.148) 0:03:42.901 ******
2025-09-08 00:28:30.421265 | orchestrator | changed: [testbed-manager]
2025-09-08 00:28:30.421276 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:28:30.421287 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:28:30.421297 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:28:30.421308 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:28:30.421319 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:28:30.421330 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:28:30.421340 | orchestrator |
2025-09-08 00:28:30.421351 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-09-08 00:28:30.421362 | orchestrator | Monday 08 September 2025 00:28:17 +0000 (0:00:01.144) 0:03:44.045 ******
2025-09-08 00:28:30.421389 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:28:30.421400 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:28:30.421411 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:28:30.421439 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:28:30.421451 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:28:30.421462 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:28:30.421473 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:28:30.421483 | orchestrator |
2025-09-08 00:28:30.421494 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-09-08 00:28:30.421506 | orchestrator | Monday 08 September 2025 00:28:17 +0000 (0:00:00.253) 0:03:44.298 ******
2025-09-08 00:28:30.421517 | orchestrator | ok: [testbed-manager]
2025-09-08 00:28:30.421530 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:28:30.421541 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:28:30.421551 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:28:30.421562 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:28:30.421573 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:28:30.421583 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:28:30.421594 | orchestrator |
2025-09-08 00:28:30.421605 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-09-08 00:28:30.421616 | orchestrator | Monday 08 September 2025 00:28:18 +0000 (0:00:00.745) 0:03:45.044 ******
2025-09-08 00:28:30.421668 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:28:30.421684 | orchestrator |
2025-09-08 00:28:30.421695 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-09-08 00:28:30.421706 | orchestrator | Monday 08 September 2025 00:28:19 +0000 (0:00:00.423) 0:03:45.467 ******
2025-09-08 00:28:30.421725 | orchestrator | ok: [testbed-manager]
2025-09-08 00:28:30.421744 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:28:30.421763 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:28:30.421782 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:28:30.421798 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:28:30.421809 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:28:30.421820 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:28:30.421830 | orchestrator |
2025-09-08 00:28:30.421841 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-09-08 00:28:30.421853 | orchestrator | Monday 08 September 2025 00:28:27 +0000 (0:00:08.106) 0:03:53.574 ******
2025-09-08 00:28:30.421864 | orchestrator | ok: [testbed-manager]
2025-09-08 00:28:30.421881 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:28:30.421892 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:28:30.421903 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:28:30.421914 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:28:30.421925 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:28:30.421935 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:28:30.421947 | orchestrator |
2025-09-08 00:28:30.421958 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-09-08 00:28:30.421969 | orchestrator | Monday 08 September 2025 00:28:28 +0000 (0:00:01.212) 0:03:54.786 ******
2025-09-08 00:28:30.421980 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:28:30.421991 | orchestrator | ok: [testbed-manager]
2025-09-08 00:28:30.422001 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:28:30.422012 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:28:30.422092 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:28:30.422104 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:28:30.422115 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:28:30.422125 | orchestrator |
2025-09-08 00:28:30.422139 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-09-08 00:28:30.422158 | orchestrator | Monday 08 September 2025 00:28:29 +0000 (0:00:01.019) 0:03:55.806 ******
2025-09-08 00:28:30.422178 | orchestrator | ok: [testbed-manager]
2025-09-08 00:28:30.422209 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:28:30.422221 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:28:30.422232 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:28:30.422243 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:28:30.422253 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:28:30.422264 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:28:30.422275 | orchestrator |
2025-09-08 00:28:30.422286 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-09-08 00:28:30.422304 | orchestrator | Monday 08 September 2025 00:28:29 +0000 (0:00:00.448) 0:03:56.255 ******
2025-09-08 00:28:30.422322 | orchestrator | ok: [testbed-manager]
2025-09-08 00:28:30.422339 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:28:30.422357 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:28:30.422375 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:28:30.422393 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:28:30.422411 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:28:30.422429 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:28:30.422449 | orchestrator |
2025-09-08 00:28:30.422488 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-09-08 00:28:30.422500 | orchestrator | Monday 08 September 2025 00:28:30 +0000 (0:00:00.268) 0:03:56.523 ******
2025-09-08 00:28:30.422511 | orchestrator | ok: [testbed-manager]
2025-09-08 00:28:30.422522 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:28:30.422533 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:28:30.422544 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:28:30.422554 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:28:30.422577 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:29:40.075788 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:29:40.075915 | orchestrator |
2025-09-08 00:29:40.075932 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-09-08 00:29:40.075946 | orchestrator | Monday 08 September 2025 00:28:30 +0000 (0:00:00.279) 0:03:56.803 ******
2025-09-08 00:29:40.075957 | orchestrator | ok: [testbed-manager]
2025-09-08 00:29:40.075969 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:29:40.075980 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:29:40.075991 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:29:40.076002 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:29:40.076013 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:29:40.076024 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:29:40.076034 | orchestrator |
2025-09-08 00:29:40.076046 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-09-08 00:29:40.076057 | orchestrator | Monday 08 September 2025 00:28:35 +0000 (0:00:05.574) 0:04:02.378 ******
2025-09-08 00:29:40.076071 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:29:40.076085 | orchestrator |
2025-09-08 00:29:40.076096 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-09-08 00:29:40.076108 | orchestrator | Monday 08 September 2025 00:28:36 +0000 (0:00:00.394) 0:04:02.773 ******
2025-09-08 00:29:40.076119 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-09-08 00:29:40.076130 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-09-08 00:29:40.076141 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-09-08 00:29:40.076152 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:29:40.076163 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-09-08 00:29:40.076174 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:29:40.076185 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-09-08 00:29:40.076196 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-09-08 00:29:40.076207 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-09-08 00:29:40.076218 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-09-08 00:29:40.076229 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:29:40.076267 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-09-08 00:29:40.076282 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-09-08 00:29:40.076296 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:29:40.076309 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-09-08 00:29:40.076322 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-09-08 00:29:40.076334 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:29:40.076347 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:29:40.076360 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-09-08 00:29:40.076373 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-09-08 00:29:40.076387 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:29:40.076399 | orchestrator |
2025-09-08 00:29:40.076413 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-09-08 00:29:40.076424 | orchestrator | Monday 08 September 2025 00:28:36 +0000 (0:00:00.317) 0:04:03.090 ******
2025-09-08 00:29:40.076451 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:29:40.076463 | orchestrator |
2025-09-08 00:29:40.076474 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-09-08 00:29:40.076487 | orchestrator | Monday 08 September 2025 00:28:37 +0000 (0:00:00.427) 0:04:03.518 ******
2025-09-08 00:29:40.076505 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-09-08 00:29:40.076524 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-09-08 00:29:40.076555 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:29:40.076573 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-09-08 00:29:40.076591 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:29:40.076699 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:29:40.076712 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-09-08 00:29:40.076723 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:29:40.076734 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-09-08 00:29:40.076745 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-09-08 00:29:40.076755 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:29:40.076766 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:29:40.076777 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-09-08 00:29:40.076787 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:29:40.076798 | orchestrator |
2025-09-08 00:29:40.076809 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-09-08 00:29:40.076820 | orchestrator | Monday 08 September 2025 00:28:37 +0000 (0:00:00.342) 0:04:03.860 ******
2025-09-08 00:29:40.076830 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:29:40.076842 | orchestrator |
2025-09-08 00:29:40.076852 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-09-08 00:29:40.076863 | orchestrator | Monday 08 September 2025 00:28:37 +0000 (0:00:00.386) 0:04:04.247 ******
2025-09-08 00:29:40.076874 | orchestrator | changed: [testbed-manager]
2025-09-08 00:29:40.076905 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:29:40.076917 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:29:40.076928 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:29:40.076939 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:29:40.076949 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:29:40.076960 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:29:40.076971 | orchestrator |
2025-09-08 00:29:40.076982 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-09-08 00:29:40.077004 | orchestrator | Monday 08 September 2025 00:29:12 +0000 (0:00:34.392) 0:04:38.640 ******
2025-09-08 00:29:40.077015 | orchestrator | changed: [testbed-manager]
2025-09-08 00:29:40.077026 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:29:40.077037 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:29:40.077047 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:29:40.077058 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:29:40.077068 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:29:40.077079 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:29:40.077090 | orchestrator |
2025-09-08 00:29:40.077101 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-09-08 00:29:40.077112 | orchestrator | Monday 08 September 2025 00:29:20 +0000 (0:00:08.607) 0:04:47.247 ******
2025-09-08 00:29:40.077123 | orchestrator | changed: [testbed-manager]
2025-09-08 00:29:40.077133 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:29:40.077144 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:29:40.077155 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:29:40.077165 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:29:40.077176 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:29:40.077187 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:29:40.077197 | orchestrator |
2025-09-08 00:29:40.077208 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-09-08 00:29:40.077219 | orchestrator | Monday 08 September 2025 00:29:28 +0000 (0:00:07.803) 0:04:55.051 ******
2025-09-08 00:29:40.077230 | orchestrator | ok: [testbed-manager]
2025-09-08 00:29:40.077241 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:29:40.077251 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:29:40.077262 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:29:40.077273 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:29:40.077283 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:29:40.077294 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:29:40.077304 | orchestrator |
2025-09-08 00:29:40.077315 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-09-08 00:29:40.077327 | orchestrator | Monday 08 September 2025 00:29:30 +0000 (0:00:01.592) 0:04:56.644 ******
2025-09-08 00:29:40.077338 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:29:40.077349 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:29:40.077359 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:29:40.077370 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:29:40.077380 | orchestrator | changed: [testbed-manager]
2025-09-08 00:29:40.077391 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:29:40.077402 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:29:40.077412 | orchestrator |
2025-09-08 00:29:40.077423 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-09-08 00:29:40.077434 | orchestrator | Monday 08 September 2025 00:29:35 +0000 (0:00:05.750) 0:05:02.394 ******
2025-09-08 00:29:40.077445 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:29:40.077457 | orchestrator |
2025-09-08 00:29:40.077468 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-09-08 00:29:40.077486 | orchestrator | Monday 08 September 2025 00:29:36 +0000 (0:00:00.548) 0:05:02.942 ******
2025-09-08 00:29:40.077498 | orchestrator | changed: [testbed-manager]
2025-09-08 00:29:40.077508 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:29:40.077519 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:29:40.077530 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:29:40.077540 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:29:40.077551 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:29:40.077562 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:29:40.077572 | orchestrator |
2025-09-08 00:29:40.077583 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-09-08 00:29:40.077601 | orchestrator | Monday 08 September 2025 00:29:37 +0000 (0:00:00.769) 0:05:03.712 ******
2025-09-08 00:29:40.077612 | orchestrator | ok: [testbed-manager]
2025-09-08 00:29:40.077657 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:29:40.077670 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:29:40.077680 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:29:40.077691 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:29:40.077702 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:29:40.077712 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:29:40.077723 | orchestrator |
2025-09-08 00:29:40.077734 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-09-08 00:29:40.077745 | orchestrator | Monday 08 September 2025 00:29:38 +0000 (0:00:01.681) 0:05:05.394 ******
2025-09-08 00:29:40.077755 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:29:40.077766 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:29:40.077777 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:29:40.077787 | orchestrator | changed: [testbed-manager]
2025-09-08 00:29:40.077798 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:29:40.077809 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:29:40.077819 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:29:40.077830 | orchestrator |
2025-09-08 00:29:40.077841 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-09-08 00:29:40.077852 | orchestrator | Monday 08 September 2025 00:29:39 +0000 (0:00:00.782) 0:05:06.177 ******
2025-09-08 00:29:40.077862 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:29:40.077873 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:29:40.077884 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:29:40.077895 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:29:40.077905 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:29:40.077916 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:29:40.077927 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:29:40.077937 | orchestrator |
2025-09-08 00:29:40.077948 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-09-08 00:29:40.077967 | orchestrator | Monday 08 September 2025 00:29:40 +0000 (0:00:00.277) 0:05:06.454 ******
2025-09-08 00:30:05.662515 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:30:05.662617 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:30:05.662681 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:30:05.662693 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:30:05.662704 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:30:05.662716 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:30:05.662726 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:30:05.662738 | orchestrator |
2025-09-08 00:30:05.662750 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-09-08 00:30:05.662761 | orchestrator | Monday 08 September 2025 00:29:40 +0000 (0:00:00.394) 0:05:06.849 ******
2025-09-08 00:30:05.662773 | orchestrator | ok: [testbed-manager]
2025-09-08 00:30:05.662784 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:30:05.662795 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:30:05.662806 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:30:05.662817 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:30:05.662828 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:30:05.662839 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:30:05.662850 | orchestrator |
2025-09-08 00:30:05.662861 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-09-08 00:30:05.662872 | orchestrator | Monday 08 September 2025 00:29:40 +0000 (0:00:00.317) 0:05:07.167 ******
2025-09-08 00:30:05.662883 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:30:05.662894 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:30:05.662905 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:30:05.662916 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:30:05.662927 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:30:05.662938 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:30:05.662949 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:30:05.662982 | orchestrator |
2025-09-08 00:30:05.662994 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-09-08 00:30:05.663005 | orchestrator | Monday 08 September 2025 00:29:41 +0000 (0:00:00.248) 0:05:07.415 ******
2025-09-08 00:30:05.663016 | orchestrator | ok: [testbed-manager]
2025-09-08 00:30:05.663027 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:30:05.663038 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:30:05.663049 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:30:05.663060 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:30:05.663073 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:30:05.663087 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:30:05.663100 | orchestrator |
2025-09-08 00:30:05.663113 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-09-08 00:30:05.663125 | orchestrator | Monday 08 September 2025 00:29:41 +0000 (0:00:00.314) 0:05:07.730 ******
2025-09-08 00:30:05.663138 | orchestrator | ok: [testbed-manager] =>
2025-09-08 00:30:05.663151 | orchestrator |  docker_version: 5:27.5.1
2025-09-08 00:30:05.663164 | orchestrator | ok: [testbed-node-0] =>
2025-09-08 00:30:05.663176 | orchestrator |  docker_version: 5:27.5.1
2025-09-08 00:30:05.663188 | orchestrator | ok: [testbed-node-1] =>
2025-09-08 00:30:05.663201 | orchestrator |  docker_version: 5:27.5.1
2025-09-08 00:30:05.663213 | orchestrator | ok: [testbed-node-2] =>
2025-09-08 00:30:05.663226 | orchestrator |  docker_version: 5:27.5.1
2025-09-08 00:30:05.663239 | orchestrator | ok: [testbed-node-3] =>
2025-09-08 00:30:05.663251 | orchestrator |  docker_version: 5:27.5.1
2025-09-08 00:30:05.663264 | orchestrator | ok: [testbed-node-4] =>
2025-09-08 00:30:05.663277 | orchestrator |  docker_version: 5:27.5.1
2025-09-08 00:30:05.663289 | orchestrator | ok: [testbed-node-5] =>
2025-09-08 00:30:05.663302 | orchestrator |  docker_version: 5:27.5.1
2025-09-08 00:30:05.663315 | orchestrator |
2025-09-08 00:30:05.663328 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-09-08 00:30:05.663341 | orchestrator | Monday 08 September 2025 00:29:41 +0000 (0:00:00.318) 0:05:08.048 ******
2025-09-08 00:30:05.663354 | orchestrator | ok: [testbed-manager] =>
2025-09-08 00:30:05.663367 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-08 00:30:05.663379 | orchestrator | ok: [testbed-node-0] =>
2025-09-08 00:30:05.663392 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-08 00:30:05.663405 | orchestrator | ok: [testbed-node-1] =>
2025-09-08 00:30:05.663418 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-08 00:30:05.663429 | orchestrator | ok: [testbed-node-2] =>
2025-09-08 00:30:05.663440 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-08 00:30:05.663451 | orchestrator | ok: [testbed-node-3] =>
2025-09-08 00:30:05.663461 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-08 00:30:05.663472 | orchestrator | ok: [testbed-node-4] =>
2025-09-08 00:30:05.663483 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-08 00:30:05.663494 | orchestrator | ok: [testbed-node-5] =>
2025-09-08 00:30:05.663504 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-08 00:30:05.663515 | orchestrator |
2025-09-08 00:30:05.663526 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-09-08 00:30:05.663537 | orchestrator | Monday 08 September 2025 00:29:41 +0000 (0:00:00.285) 0:05:08.333 ******
2025-09-08 00:30:05.663548 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:30:05.663559 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:30:05.663570 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:30:05.663581 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:30:05.663591 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:30:05.663602 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:30:05.663613 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:30:05.663639 | orchestrator |
2025-09-08 00:30:05.663651 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-09-08 00:30:05.663662 | orchestrator | Monday 08 September 2025 00:29:42 +0000 (0:00:00.300) 0:05:08.634 ******
2025-09-08 00:30:05.663673 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:30:05.663691 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:30:05.663702
| orchestrator | skipping: [testbed-node-1] 2025-09-08 00:30:05.663713 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:30:05.663724 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:30:05.663735 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:30:05.663746 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:30:05.663756 | orchestrator | 2025-09-08 00:30:05.663768 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-09-08 00:30:05.663779 | orchestrator | Monday 08 September 2025 00:29:42 +0000 (0:00:00.271) 0:05:08.906 ****** 2025-09-08 00:30:05.663806 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:30:05.663820 | orchestrator | 2025-09-08 00:30:05.663832 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-09-08 00:30:05.663843 | orchestrator | Monday 08 September 2025 00:29:42 +0000 (0:00:00.442) 0:05:09.349 ****** 2025-09-08 00:30:05.663854 | orchestrator | ok: [testbed-manager] 2025-09-08 00:30:05.663865 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:30:05.663876 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:30:05.663887 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:30:05.663898 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:30:05.663909 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:30:05.663920 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:30:05.663931 | orchestrator | 2025-09-08 00:30:05.663942 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-09-08 00:30:05.663953 | orchestrator | Monday 08 September 2025 00:29:43 +0000 (0:00:00.788) 0:05:10.137 ****** 2025-09-08 00:30:05.663964 | orchestrator | ok: [testbed-node-4] 
2025-09-08 00:30:05.663975 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:30:05.663986 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:30:05.663997 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:30:05.664007 | orchestrator | ok: [testbed-manager] 2025-09-08 00:30:05.664018 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:30:05.664029 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:30:05.664040 | orchestrator | 2025-09-08 00:30:05.664051 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-09-08 00:30:05.664063 | orchestrator | Monday 08 September 2025 00:29:46 +0000 (0:00:03.027) 0:05:13.164 ****** 2025-09-08 00:30:05.664074 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-09-08 00:30:05.664085 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-09-08 00:30:05.664096 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-09-08 00:30:05.664107 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-09-08 00:30:05.664134 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-09-08 00:30:05.664145 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-09-08 00:30:05.664156 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:30:05.664167 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-09-08 00:30:05.664178 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-09-08 00:30:05.664189 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-09-08 00:30:05.664200 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:30:05.664211 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-09-08 00:30:05.664222 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-09-08 00:30:05.664233 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-09-08 00:30:05.664244 | 
orchestrator | skipping: [testbed-node-1] 2025-09-08 00:30:05.664254 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-09-08 00:30:05.664265 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-09-08 00:30:05.664276 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-09-08 00:30:05.664293 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:30:05.664304 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-09-08 00:30:05.664315 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-09-08 00:30:05.664326 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-09-08 00:30:05.664337 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:30:05.664348 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:30:05.664363 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-09-08 00:30:05.664374 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-09-08 00:30:05.664385 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-09-08 00:30:05.664396 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:30:05.664407 | orchestrator | 2025-09-08 00:30:05.664418 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-09-08 00:30:05.664429 | orchestrator | Monday 08 September 2025 00:29:47 +0000 (0:00:00.592) 0:05:13.756 ****** 2025-09-08 00:30:05.664440 | orchestrator | ok: [testbed-manager] 2025-09-08 00:30:05.664451 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:30:05.664462 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:30:05.664473 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:30:05.664484 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:30:05.664495 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:30:05.664505 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:30:05.664516 | orchestrator | 2025-09-08 
00:30:05.664527 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-09-08 00:30:05.664538 | orchestrator | Monday 08 September 2025 00:29:53 +0000 (0:00:06.196) 0:05:19.953 ****** 2025-09-08 00:30:05.664549 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:30:05.664560 | orchestrator | ok: [testbed-manager] 2025-09-08 00:30:05.664571 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:30:05.664582 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:30:05.664593 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:30:05.664604 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:30:05.664615 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:30:05.664640 | orchestrator | 2025-09-08 00:30:05.664651 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-09-08 00:30:05.664662 | orchestrator | Monday 08 September 2025 00:29:54 +0000 (0:00:01.195) 0:05:21.148 ****** 2025-09-08 00:30:05.664673 | orchestrator | ok: [testbed-manager] 2025-09-08 00:30:05.664684 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:30:05.664695 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:30:05.664706 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:30:05.664717 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:30:05.664728 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:30:05.664739 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:30:05.664750 | orchestrator | 2025-09-08 00:30:05.664761 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-09-08 00:30:05.664772 | orchestrator | Monday 08 September 2025 00:30:02 +0000 (0:00:07.674) 0:05:28.823 ****** 2025-09-08 00:30:05.664783 | orchestrator | changed: [testbed-manager] 2025-09-08 00:30:05.664794 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:30:05.664804 | orchestrator | changed: [testbed-node-1] 2025-09-08 
00:30:05.664822 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:30:48.368728 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:30:48.368855 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:30:48.368872 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:30:48.368884 | orchestrator | 2025-09-08 00:30:48.368897 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-09-08 00:30:48.368911 | orchestrator | Monday 08 September 2025 00:30:05 +0000 (0:00:03.217) 0:05:32.041 ****** 2025-09-08 00:30:48.368922 | orchestrator | ok: [testbed-manager] 2025-09-08 00:30:48.368934 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:30:48.368945 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:30:48.368984 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:30:48.368996 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:30:48.369006 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:30:48.369017 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:30:48.369028 | orchestrator | 2025-09-08 00:30:48.369039 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-09-08 00:30:48.369049 | orchestrator | Monday 08 September 2025 00:30:06 +0000 (0:00:01.271) 0:05:33.312 ****** 2025-09-08 00:30:48.369060 | orchestrator | ok: [testbed-manager] 2025-09-08 00:30:48.369071 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:30:48.369081 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:30:48.369092 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:30:48.369103 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:30:48.369113 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:30:48.369124 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:30:48.369134 | orchestrator | 2025-09-08 00:30:48.369145 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-09-08 
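The two pin tasks above typically translate into apt preferences entries, so that a routine `apt upgrade` cannot move the hosts off the tested Docker release (5:27.5.1 in this run). A minimal sketch of such pins; the file paths and the exact pin pattern are illustrative assumptions, not read from this job:

```
# /etc/apt/preferences.d/docker-ce (hypothetical path)
Package: docker-ce
Pin: version 5:27.5.1*
Pin-Priority: 1001

# /etc/apt/preferences.d/docker-ce-cli (hypothetical path)
Package: docker-ce-cli
Pin: version 5:27.5.1*
Pin-Priority: 1001
```

A priority above 1000 even permits downgrading to the pinned version; `apt-cache policy docker-ce` shows whether the pin is in effect.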
00:30:48.369156 | orchestrator | Monday 08 September 2025 00:30:08 +0000 (0:00:00.568) 0:05:34.813 ******
2025-09-08 00:30:48.369166 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:30:48.369177 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:30:48.369187 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:30:48.369200 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:30:48.369213 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:30:48.369225 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:30:48.369237 | orchestrator | changed: [testbed-manager]
2025-09-08 00:30:48.369276 | orchestrator |
2025-09-08 00:30:48.369290 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-09-08 00:30:48.369303 | orchestrator | Monday 08 September 2025 00:30:08 +0000 (0:00:00.568) 0:05:35.382 ******
2025-09-08 00:30:48.369315 | orchestrator | ok: [testbed-manager]
2025-09-08 00:30:48.369328 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:30:48.369340 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:30:48.369353 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:30:48.369365 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:30:48.369377 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:30:48.369390 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:30:48.369403 | orchestrator |
2025-09-08 00:30:48.369415 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-09-08 00:30:48.369428 | orchestrator | Monday 08 September 2025 00:30:18 +0000 (0:00:09.703) 0:05:45.085 ******
2025-09-08 00:30:48.369440 | orchestrator | changed: [testbed-manager]
2025-09-08 00:30:48.369453 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:30:48.369465 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:30:48.369477 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:30:48.369490 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:30:48.369502 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:30:48.369515 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:30:48.369528 | orchestrator |
2025-09-08 00:30:48.369541 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-09-08 00:30:48.369569 | orchestrator | Monday 08 September 2025 00:30:19 +0000 (0:00:00.907) 0:05:45.993 ******
2025-09-08 00:30:48.369581 | orchestrator | ok: [testbed-manager]
2025-09-08 00:30:48.369592 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:30:48.369602 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:30:48.369613 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:30:48.369624 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:30:48.369653 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:30:48.369664 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:30:48.369675 | orchestrator |
2025-09-08 00:30:48.369685 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-09-08 00:30:48.369697 | orchestrator | Monday 08 September 2025 00:30:27 +0000 (0:00:08.362) 0:05:54.355 ******
2025-09-08 00:30:48.369716 | orchestrator | ok: [testbed-manager]
2025-09-08 00:30:48.369726 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:30:48.369737 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:30:48.369748 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:30:48.369759 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:30:48.369770 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:30:48.369780 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:30:48.369791 | orchestrator |
2025-09-08 00:30:48.369802 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-09-08 00:30:48.369813 | orchestrator | Monday 08 September 2025 00:30:38 +0000 (0:00:10.501) 0:06:04.857 ******
2025-09-08 00:30:48.369824 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-09-08 00:30:48.369835 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-09-08 00:30:48.369846 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-09-08 00:30:48.369857 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-09-08 00:30:48.369868 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-09-08 00:30:48.369878 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-09-08 00:30:48.369889 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-09-08 00:30:48.369900 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-09-08 00:30:48.369911 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-09-08 00:30:48.369921 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-09-08 00:30:48.369932 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-09-08 00:30:48.369943 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-09-08 00:30:48.369954 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-09-08 00:30:48.369965 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-09-08 00:30:48.369976 | orchestrator |
2025-09-08 00:30:48.369987 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-09-08 00:30:48.370071 | orchestrator | Monday 08 September 2025 00:30:39 +0000 (0:00:01.166) 0:06:06.023 ******
2025-09-08 00:30:48.370087 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:30:48.370098 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:30:48.370109 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:30:48.370120 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:30:48.370131 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:30:48.370142 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:30:48.370153 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:30:48.370164 | orchestrator |
2025-09-08 00:30:48.370175 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-09-08 00:30:48.370186 | orchestrator | Monday 08 September 2025 00:30:40 +0000 (0:00:00.550) 0:06:06.574 ******
2025-09-08 00:30:48.370197 | orchestrator | ok: [testbed-manager]
2025-09-08 00:30:48.370208 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:30:48.370219 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:30:48.370229 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:30:48.370240 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:30:48.370251 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:30:48.370262 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:30:48.370273 | orchestrator |
2025-09-08 00:30:48.370284 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-09-08 00:30:48.370296 | orchestrator | Monday 08 September 2025 00:30:43 +0000 (0:00:03.719) 0:06:10.293 ******
2025-09-08 00:30:48.370307 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:30:48.370318 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:30:48.370329 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:30:48.370339 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:30:48.370350 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:30:48.370361 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:30:48.370372 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:30:48.370390 | orchestrator |
2025-09-08 00:30:48.370402 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-09-08 00:30:48.370413 | orchestrator | Monday 08 September 2025 00:30:44 +0000 (0:00:00.486) 0:06:10.779 ******
2025-09-08 00:30:48.370424 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-09-08 00:30:48.370435 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-09-08 00:30:48.370446 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:30:48.370457 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-09-08 00:30:48.370468 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-09-08 00:30:48.370479 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:30:48.370489 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-09-08 00:30:48.370500 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-09-08 00:30:48.370511 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:30:48.370522 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-09-08 00:30:48.370533 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-09-08 00:30:48.370543 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:30:48.370554 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-09-08 00:30:48.370565 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-09-08 00:30:48.370576 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:30:48.370586 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-09-08 00:30:48.370603 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-09-08 00:30:48.370614 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:30:48.370625 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-09-08 00:30:48.370652 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-09-08 00:30:48.370664 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:30:48.370674 | orchestrator |
2025-09-08 00:30:48.370685 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-09-08 00:30:48.370696 | orchestrator | Monday 08 September 2025 00:30:45 +0000 (0:00:00.759) 0:06:11.539 ******
2025-09-08 00:30:48.370707 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:30:48.370718 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:30:48.370729 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:30:48.370740 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:30:48.370751 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:30:48.370761 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:30:48.370772 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:30:48.370783 | orchestrator |
2025-09-08 00:30:48.370794 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-09-08 00:30:48.370805 | orchestrator | Monday 08 September 2025 00:30:45 +0000 (0:00:00.551) 0:06:12.091 ******
2025-09-08 00:30:48.370816 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:30:48.370826 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:30:48.370837 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:30:48.370848 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:30:48.370859 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:30:48.370870 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:30:48.370880 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:30:48.370891 | orchestrator |
2025-09-08 00:30:48.370902 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-09-08 00:30:48.370913 | orchestrator | Monday 08 September 2025 00:30:46 +0000 (0:00:00.548) 0:06:12.639 ******
2025-09-08 00:30:48.370924 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:30:48.370935 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:30:48.370946 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:30:48.370956 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:30:48.370967 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:30:48.370985 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:30:48.370996 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:30:48.371006 | orchestrator |
2025-09-08 00:30:48.371017 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-09-08 00:30:48.371028 | orchestrator | Monday 08 September 2025 00:30:46 +0000 (0:00:00.531) 0:06:13.171 ******
2025-09-08 00:30:48.371040 | orchestrator | ok: [testbed-manager]
2025-09-08 00:30:48.371058 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:31:10.198521 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:31:10.198694 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:31:10.198713 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:31:10.198725 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:31:10.198737 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:31:10.198748 | orchestrator |
2025-09-08 00:31:10.198761 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-09-08 00:31:10.198773 | orchestrator | Monday 08 September 2025 00:30:48 +0000 (0:00:01.576) 0:06:14.747 ******
2025-09-08 00:31:10.198786 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:31:10.198799 | orchestrator |
2025-09-08 00:31:10.198811 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-09-08 00:31:10.198822 | orchestrator | Monday 08 September 2025 00:30:49 +0000 (0:00:01.050) 0:06:15.797 ******
2025-09-08 00:31:10.198833 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:10.198844 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:10.198856 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:31:10.198867 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:31:10.198878 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:31:10.198889 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:31:10.198900 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:31:10.198910 | orchestrator |
2025-09-08 00:31:10.198922 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-09-08 00:31:10.198933 | orchestrator | Monday 08 September 2025 00:30:50 +0000 (0:00:00.883) 0:06:16.680 ******
2025-09-08 00:31:10.198944 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:10.198955 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:10.198966 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:31:10.198977 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:31:10.198988 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:31:10.199013 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:31:10.199024 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:31:10.199049 | orchestrator |
2025-09-08 00:31:10.199063 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-09-08 00:31:10.199076 | orchestrator | Monday 08 September 2025 00:30:51 +0000 (0:00:00.889) 0:06:17.570 ******
2025-09-08 00:31:10.199089 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:10.199101 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:10.199114 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:31:10.199127 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:31:10.199139 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:31:10.199153 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:31:10.199165 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:31:10.199177 | orchestrator |
2025-09-08 00:31:10.199191 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-09-08 00:31:10.199205 |
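The overlay directory and file handled above amount to a systemd drop-in for the docker unit. The actual contents the role deploys are not visible in this log; a representative drop-in, purely as an illustration (file name and settings are assumptions):

```
# /etc/systemd/system/docker.service.d/overlay.conf (illustrative name and contents)
[Service]
# Example override: raise the daemon's open-file limit.
LimitNOFILE=1048576
```

Drop-ins only take effect after `systemctl daemon-reload`, which is why the reload task that follows runs on exactly the hosts where the overlay file changed.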
orchestrator | Monday 08 September 2025 00:30:52 +0000 (0:00:01.549) 0:06:19.119 ******
2025-09-08 00:31:10.199216 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:31:10.199229 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:31:10.199241 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:31:10.199254 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:31:10.199267 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:31:10.199280 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:31:10.199325 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:31:10.199338 | orchestrator |
2025-09-08 00:31:10.199352 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-09-08 00:31:10.199382 | orchestrator | Monday 08 September 2025 00:30:54 +0000 (0:00:01.453) 0:06:20.573 ******
2025-09-08 00:31:10.199394 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:10.199405 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:10.199416 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:31:10.199427 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:31:10.199437 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:31:10.199448 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:31:10.199459 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:31:10.199470 | orchestrator |
2025-09-08 00:31:10.199481 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-09-08 00:31:10.199491 | orchestrator | Monday 08 September 2025 00:30:55 +0000 (0:00:01.338) 0:06:21.912 ******
2025-09-08 00:31:10.199503 | orchestrator | changed: [testbed-manager]
2025-09-08 00:31:10.199513 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:10.199524 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:31:10.199535 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:31:10.199546 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:31:10.199556 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:31:10.199567 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:31:10.199578 | orchestrator |
2025-09-08 00:31:10.199589 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-09-08 00:31:10.199600 | orchestrator | Monday 08 September 2025 00:30:56 +0000 (0:00:01.047) 0:06:23.277 ******
2025-09-08 00:31:10.199611 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:31:10.199622 | orchestrator |
2025-09-08 00:31:10.199633 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-09-08 00:31:10.199709 | orchestrator | Monday 08 September 2025 00:30:57 +0000 (0:00:01.047) 0:06:24.324 ******
2025-09-08 00:31:10.199720 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:10.199731 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:31:10.199743 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:31:10.199754 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:31:10.199765 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:31:10.199776 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:31:10.199787 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:31:10.199798 | orchestrator |
2025-09-08 00:31:10.199809 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-09-08 00:31:10.199820 | orchestrator | Monday 08 September 2025 00:30:59 +0000 (0:00:01.344) 0:06:25.669 ******
2025-09-08 00:31:10.199831 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:10.199842 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:31:10.199872 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:31:10.199884 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:31:10.199895 | orchestrator |
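The daemon.json copied above changed on every host, which is why the docker restart handler fires later in the log. A small sketch of staging and syntax-checking such a file before it would be installed to `/etc/docker/daemon.json`; the contents shown are typical examples, not the file this role actually deployed:

```shell
#!/bin/sh
# Sketch: stage an illustrative daemon.json and syntax-check it before copy.
set -eu
tmpdir=$(mktemp -d)
cat > "$tmpdir/daemon.json" <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF
# A malformed daemon.json would make dockerd refuse to start on restart,
# so validate it as JSON before it ever reaches /etc/docker/.
python3 -m json.tool "$tmpdir/daemon.json" > /dev/null && echo "daemon.json OK"
```

Validating before copy is cheap insurance: a syntax error here is one of the few ways this step can take the whole docker service down when the restart handler runs.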
ok: [testbed-node-3]
2025-09-08 00:31:10.199906 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:31:10.199917 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:31:10.199928 | orchestrator |
2025-09-08 00:31:10.199939 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-09-08 00:31:10.199950 | orchestrator | Monday 08 September 2025 00:31:00 +0000 (0:00:01.095) 0:06:26.764 ******
2025-09-08 00:31:10.199961 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:10.199972 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:31:10.199983 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:31:10.199994 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:31:10.200004 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:31:10.200015 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:31:10.200026 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:31:10.200037 | orchestrator |
2025-09-08 00:31:10.200048 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-09-08 00:31:10.200070 | orchestrator | Monday 08 September 2025 00:31:01 +0000 (0:00:01.121) 0:06:27.886 ******
2025-09-08 00:31:10.200081 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:10.200092 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:31:10.200103 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:31:10.200113 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:31:10.200124 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:31:10.200135 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:31:10.200146 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:31:10.200157 | orchestrator |
2025-09-08 00:31:10.200168 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-09-08 00:31:10.200178 | orchestrator | Monday 08 September 2025 00:31:02 +0000 (0:00:01.091) 0:06:28.977 ******
2025-09-08 00:31:10.200189 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:31:10.200201 | orchestrator |
2025-09-08 00:31:10.200212 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-08 00:31:10.200222 | orchestrator | Monday 08 September 2025 00:31:03 +0000 (0:00:01.052) 0:06:30.030 ******
2025-09-08 00:31:10.200233 | orchestrator |
2025-09-08 00:31:10.200244 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-08 00:31:10.200255 | orchestrator | Monday 08 September 2025 00:31:03 +0000 (0:00:00.045) 0:06:30.075 ******
2025-09-08 00:31:10.200266 | orchestrator |
2025-09-08 00:31:10.200277 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-08 00:31:10.200288 | orchestrator | Monday 08 September 2025 00:31:03 +0000 (0:00:00.038) 0:06:30.114 ******
2025-09-08 00:31:10.200299 | orchestrator |
2025-09-08 00:31:10.200310 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-08 00:31:10.200321 | orchestrator | Monday 08 September 2025 00:31:03 +0000 (0:00:00.037) 0:06:30.152 ******
2025-09-08 00:31:10.200331 | orchestrator |
2025-09-08 00:31:10.200342 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-08 00:31:10.200353 | orchestrator | Monday 08 September 2025 00:31:03 +0000 (0:00:00.044) 0:06:30.196 ******
2025-09-08 00:31:10.200364 | orchestrator |
2025-09-08 00:31:10.200375 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-08 00:31:10.200386 | orchestrator | Monday 08 September 2025 00:31:03 +0000 (0:00:00.039) 0:06:30.236 ******
2025-09-08 00:31:10.200397 | orchestrator |
2025-09-08 00:31:10.200408 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-08 00:31:10.200419 | orchestrator | Monday 08 September 2025 00:31:03 +0000 (0:00:00.039) 0:06:30.275 ******
2025-09-08 00:31:10.200430 | orchestrator |
2025-09-08 00:31:10.200441 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-08 00:31:10.200452 | orchestrator | Monday 08 September 2025 00:31:03 +0000 (0:00:00.044) 0:06:30.320 ******
2025-09-08 00:31:10.200463 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:31:10.200474 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:31:10.200485 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:31:10.200496 | orchestrator |
2025-09-08 00:31:10.200507 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-09-08 00:31:10.200518 | orchestrator | Monday 08 September 2025 00:31:05 +0000 (0:00:01.133) 0:06:31.453 ******
2025-09-08 00:31:10.200529 | orchestrator | changed: [testbed-manager]
2025-09-08 00:31:10.200540 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:10.200551 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:31:10.200562 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:31:10.200573 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:31:10.200584 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:31:10.200595 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:31:10.200605 | orchestrator |
2025-09-08 00:31:10.200617 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-09-08 00:31:10.200657 | orchestrator | Monday 08 September 2025 00:31:06 +0000 (0:00:01.327) 0:06:32.781 ******
2025-09-08 00:31:10.200669 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:31:10.200680 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:10.200691 | orchestrator | changed: [testbed-node-1]
2025-09-08
00:31:10.200702 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:31:10.200713 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:31:10.200724 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:31:10.200735 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:31:10.200746 | orchestrator | 2025-09-08 00:31:10.200756 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-09-08 00:31:10.200767 | orchestrator | Monday 08 September 2025 00:31:09 +0000 (0:00:02.667) 0:06:35.449 ****** 2025-09-08 00:31:10.200778 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:31:10.200789 | orchestrator | 2025-09-08 00:31:10.200810 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-09-08 00:31:10.200821 | orchestrator | Monday 08 September 2025 00:31:09 +0000 (0:00:00.137) 0:06:35.586 ****** 2025-09-08 00:31:10.200832 | orchestrator | ok: [testbed-manager] 2025-09-08 00:31:10.200843 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:31:10.200854 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:31:10.200865 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:31:10.200882 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:31:35.424147 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:31:35.424252 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:31:35.424264 | orchestrator | 2025-09-08 00:31:35.424274 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-09-08 00:31:35.424283 | orchestrator | Monday 08 September 2025 00:31:10 +0000 (0:00:00.989) 0:06:36.575 ****** 2025-09-08 00:31:35.424293 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:31:35.424301 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:31:35.424309 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:31:35.424317 | orchestrator | skipping: [testbed-node-2] 2025-09-08 
00:31:35.424325 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:31:35.424333 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:31:35.424341 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:31:35.424349 | orchestrator | 2025-09-08 00:31:35.424357 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-09-08 00:31:35.424365 | orchestrator | Monday 08 September 2025 00:31:10 +0000 (0:00:00.521) 0:06:37.096 ****** 2025-09-08 00:31:35.424374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:31:35.424384 | orchestrator | 2025-09-08 00:31:35.424392 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-09-08 00:31:35.424401 | orchestrator | Monday 08 September 2025 00:31:11 +0000 (0:00:01.081) 0:06:38.178 ****** 2025-09-08 00:31:35.424409 | orchestrator | ok: [testbed-manager] 2025-09-08 00:31:35.424418 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:31:35.424426 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:31:35.424434 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:31:35.424442 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:31:35.424450 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:31:35.424458 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:31:35.424465 | orchestrator | 2025-09-08 00:31:35.424473 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-09-08 00:31:35.424481 | orchestrator | Monday 08 September 2025 00:31:12 +0000 (0:00:00.834) 0:06:39.012 ****** 2025-09-08 00:31:35.424489 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-09-08 00:31:35.424497 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-09-08 00:31:35.424505 
| orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-09-08 00:31:35.424538 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-09-08 00:31:35.424547 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-09-08 00:31:35.424554 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-09-08 00:31:35.424562 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-09-08 00:31:35.424570 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-09-08 00:31:35.424578 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-09-08 00:31:35.424586 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-09-08 00:31:35.424594 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-09-08 00:31:35.424601 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-09-08 00:31:35.424609 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-09-08 00:31:35.424629 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-09-08 00:31:35.424663 | orchestrator | 2025-09-08 00:31:35.424672 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-09-08 00:31:35.424680 | orchestrator | Monday 08 September 2025 00:31:15 +0000 (0:00:02.447) 0:06:41.460 ****** 2025-09-08 00:31:35.424689 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:31:35.424698 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:31:35.424707 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:31:35.424716 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:31:35.424726 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:31:35.424735 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:31:35.424744 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:31:35.424753 | orchestrator | 2025-09-08 00:31:35.424763 | orchestrator | TASK 
[osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-09-08 00:31:35.424772 | orchestrator | Monday 08 September 2025 00:31:15 +0000 (0:00:00.507) 0:06:41.968 ****** 2025-09-08 00:31:35.424782 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:31:35.424794 | orchestrator | 2025-09-08 00:31:35.424804 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-09-08 00:31:35.424812 | orchestrator | Monday 08 September 2025 00:31:16 +0000 (0:00:01.055) 0:06:43.023 ****** 2025-09-08 00:31:35.424822 | orchestrator | ok: [testbed-manager] 2025-09-08 00:31:35.424832 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:31:35.424841 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:31:35.424850 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:31:35.424859 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:31:35.424867 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:31:35.424876 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:31:35.424885 | orchestrator | 2025-09-08 00:31:35.424895 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-09-08 00:31:35.424904 | orchestrator | Monday 08 September 2025 00:31:17 +0000 (0:00:00.808) 0:06:43.832 ****** 2025-09-08 00:31:35.424914 | orchestrator | ok: [testbed-manager] 2025-09-08 00:31:35.424923 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:31:35.424932 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:31:35.424941 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:31:35.424950 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:31:35.424959 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:31:35.424968 | orchestrator | ok: [testbed-node-5] 2025-09-08 
00:31:35.424976 | orchestrator | 2025-09-08 00:31:35.424985 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-09-08 00:31:35.425010 | orchestrator | Monday 08 September 2025 00:31:18 +0000 (0:00:00.840) 0:06:44.673 ****** 2025-09-08 00:31:35.425020 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:31:35.425029 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:31:35.425038 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:31:35.425058 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:31:35.425067 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:31:35.425075 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:31:35.425082 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:31:35.425090 | orchestrator | 2025-09-08 00:31:35.425098 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-09-08 00:31:35.425106 | orchestrator | Monday 08 September 2025 00:31:18 +0000 (0:00:00.495) 0:06:45.168 ****** 2025-09-08 00:31:35.425114 | orchestrator | ok: [testbed-manager] 2025-09-08 00:31:35.425121 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:31:35.425129 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:31:35.425137 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:31:35.425144 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:31:35.425152 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:31:35.425160 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:31:35.425167 | orchestrator | 2025-09-08 00:31:35.425175 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-09-08 00:31:35.425183 | orchestrator | Monday 08 September 2025 00:31:20 +0000 (0:00:01.657) 0:06:46.825 ****** 2025-09-08 00:31:35.425191 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:31:35.425198 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:31:35.425206 | orchestrator | skipping: 
[testbed-node-1] 2025-09-08 00:31:35.425214 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:31:35.425222 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:31:35.425229 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:31:35.425237 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:31:35.425245 | orchestrator | 2025-09-08 00:31:35.425252 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-09-08 00:31:35.425260 | orchestrator | Monday 08 September 2025 00:31:20 +0000 (0:00:00.514) 0:06:47.340 ****** 2025-09-08 00:31:35.425268 | orchestrator | ok: [testbed-manager] 2025-09-08 00:31:35.425276 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:31:35.425283 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:31:35.425291 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:31:35.425299 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:31:35.425306 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:31:35.425314 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:31:35.425322 | orchestrator | 2025-09-08 00:31:35.425330 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-09-08 00:31:35.425338 | orchestrator | Monday 08 September 2025 00:31:28 +0000 (0:00:07.303) 0:06:54.643 ****** 2025-09-08 00:31:35.425345 | orchestrator | ok: [testbed-manager] 2025-09-08 00:31:35.425353 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:31:35.425361 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:31:35.425368 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:31:35.425376 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:31:35.425384 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:31:35.425391 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:31:35.425399 | orchestrator | 2025-09-08 00:31:35.425407 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] 
********************** 2025-09-08 00:31:35.425415 | orchestrator | Monday 08 September 2025 00:31:29 +0000 (0:00:01.331) 0:06:55.975 ****** 2025-09-08 00:31:35.425422 | orchestrator | ok: [testbed-manager] 2025-09-08 00:31:35.425430 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:31:35.425438 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:31:35.425445 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:31:35.425453 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:31:35.425465 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:31:35.425473 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:31:35.425481 | orchestrator | 2025-09-08 00:31:35.425489 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-09-08 00:31:35.425496 | orchestrator | Monday 08 September 2025 00:31:31 +0000 (0:00:01.908) 0:06:57.883 ****** 2025-09-08 00:31:35.425504 | orchestrator | ok: [testbed-manager] 2025-09-08 00:31:35.425518 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:31:35.425526 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:31:35.425534 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:31:35.425541 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:31:35.425549 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:31:35.425557 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:31:35.425564 | orchestrator | 2025-09-08 00:31:35.425572 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-08 00:31:35.425580 | orchestrator | Monday 08 September 2025 00:31:33 +0000 (0:00:01.613) 0:06:59.496 ****** 2025-09-08 00:31:35.425588 | orchestrator | ok: [testbed-manager] 2025-09-08 00:31:35.425595 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:31:35.425603 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:31:35.425611 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:31:35.425619 | orchestrator | ok: 
[testbed-node-3] 2025-09-08 00:31:35.425626 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:31:35.425634 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:31:35.425658 | orchestrator | 2025-09-08 00:31:35.425665 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-08 00:31:35.425673 | orchestrator | Monday 08 September 2025 00:31:33 +0000 (0:00:00.830) 0:07:00.327 ****** 2025-09-08 00:31:35.425681 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:31:35.425689 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:31:35.425696 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:31:35.425704 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:31:35.425712 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:31:35.425720 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:31:35.425727 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:31:35.425735 | orchestrator | 2025-09-08 00:31:35.425743 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-09-08 00:31:35.425750 | orchestrator | Monday 08 September 2025 00:31:34 +0000 (0:00:00.969) 0:07:01.296 ****** 2025-09-08 00:31:35.425758 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:31:35.425766 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:31:35.425774 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:31:35.425781 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:31:35.425789 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:31:35.425796 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:31:35.425804 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:31:35.425812 | orchestrator | 2025-09-08 00:31:35.425824 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-09-08 00:32:07.538425 | orchestrator | Monday 08 September 2025 00:31:35 +0000 (0:00:00.509) 0:07:01.806 
****** 2025-09-08 00:32:07.538560 | orchestrator | ok: [testbed-manager] 2025-09-08 00:32:07.538615 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:32:07.538628 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:32:07.538639 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:32:07.538650 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:32:07.538662 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:32:07.538674 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:32:07.538685 | orchestrator | 2025-09-08 00:32:07.538698 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-09-08 00:32:07.538710 | orchestrator | Monday 08 September 2025 00:31:35 +0000 (0:00:00.540) 0:07:02.346 ****** 2025-09-08 00:32:07.538721 | orchestrator | ok: [testbed-manager] 2025-09-08 00:32:07.538732 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:32:07.538743 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:32:07.538754 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:32:07.538765 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:32:07.538776 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:32:07.538786 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:32:07.538797 | orchestrator | 2025-09-08 00:32:07.538808 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-09-08 00:32:07.538819 | orchestrator | Monday 08 September 2025 00:31:36 +0000 (0:00:00.536) 0:07:02.883 ****** 2025-09-08 00:32:07.538860 | orchestrator | ok: [testbed-manager] 2025-09-08 00:32:07.538872 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:32:07.538882 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:32:07.538893 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:32:07.538904 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:32:07.538914 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:32:07.538925 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:32:07.538936 | orchestrator | 
2025-09-08 00:32:07.538950 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-09-08 00:32:07.538963 | orchestrator | Monday 08 September 2025 00:31:37 +0000 (0:00:00.545) 0:07:03.428 ****** 2025-09-08 00:32:07.538975 | orchestrator | ok: [testbed-manager] 2025-09-08 00:32:07.538988 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:32:07.539000 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:32:07.539013 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:32:07.539026 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:32:07.539039 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:32:07.539052 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:32:07.539064 | orchestrator | 2025-09-08 00:32:07.539077 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-09-08 00:32:07.539090 | orchestrator | Monday 08 September 2025 00:31:42 +0000 (0:00:05.762) 0:07:09.190 ****** 2025-09-08 00:32:07.539103 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:32:07.539118 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:32:07.539131 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:32:07.539144 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:32:07.539157 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:32:07.539169 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:32:07.539183 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:32:07.539196 | orchestrator | 2025-09-08 00:32:07.539208 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-09-08 00:32:07.539221 | orchestrator | Monday 08 September 2025 00:31:43 +0000 (0:00:00.573) 0:07:09.763 ****** 2025-09-08 00:32:07.539252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:32:07.539269 | orchestrator | 2025-09-08 00:32:07.539283 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-09-08 00:32:07.539295 | orchestrator | Monday 08 September 2025 00:31:44 +0000 (0:00:00.809) 0:07:10.573 ****** 2025-09-08 00:32:07.539307 | orchestrator | ok: [testbed-manager] 2025-09-08 00:32:07.539317 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:32:07.539328 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:32:07.539339 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:32:07.539350 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:32:07.539361 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:32:07.539372 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:32:07.539382 | orchestrator | 2025-09-08 00:32:07.539393 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-09-08 00:32:07.539404 | orchestrator | Monday 08 September 2025 00:31:46 +0000 (0:00:02.300) 0:07:12.873 ****** 2025-09-08 00:32:07.539415 | orchestrator | ok: [testbed-manager] 2025-09-08 00:32:07.539426 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:32:07.539436 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:32:07.539447 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:32:07.539458 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:32:07.539468 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:32:07.539479 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:32:07.539490 | orchestrator | 2025-09-08 00:32:07.539501 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-09-08 00:32:07.539511 | orchestrator | Monday 08 September 2025 00:31:47 +0000 (0:00:01.194) 0:07:14.068 ****** 2025-09-08 00:32:07.539522 | orchestrator | ok: [testbed-manager] 2025-09-08 00:32:07.539533 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:32:07.539551 | 
orchestrator | ok: [testbed-node-1] 2025-09-08 00:32:07.539562 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:32:07.539591 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:32:07.539602 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:32:07.539613 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:32:07.539624 | orchestrator | 2025-09-08 00:32:07.539634 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-09-08 00:32:07.539645 | orchestrator | Monday 08 September 2025 00:31:48 +0000 (0:00:00.851) 0:07:14.919 ****** 2025-09-08 00:32:07.539657 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-08 00:32:07.539669 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-08 00:32:07.539680 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-08 00:32:07.539711 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-08 00:32:07.539723 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-08 00:32:07.539733 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-08 00:32:07.539745 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-08 00:32:07.539755 | orchestrator | 2025-09-08 00:32:07.539766 | orchestrator | TASK [osism.services.lldpd : Include 
distribution specific install tasks] ****** 2025-09-08 00:32:07.539777 | orchestrator | Monday 08 September 2025 00:31:50 +0000 (0:00:01.736) 0:07:16.656 ****** 2025-09-08 00:32:07.539789 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:32:07.539800 | orchestrator | 2025-09-08 00:32:07.539811 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-09-08 00:32:07.539822 | orchestrator | Monday 08 September 2025 00:31:51 +0000 (0:00:00.976) 0:07:17.633 ****** 2025-09-08 00:32:07.539833 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:32:07.539844 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:32:07.539855 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:32:07.539865 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:32:07.539876 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:32:07.539887 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:32:07.539898 | orchestrator | changed: [testbed-manager] 2025-09-08 00:32:07.539909 | orchestrator | 2025-09-08 00:32:07.539919 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-09-08 00:32:07.539930 | orchestrator | Monday 08 September 2025 00:31:59 +0000 (0:00:08.440) 0:07:26.074 ****** 2025-09-08 00:32:07.539941 | orchestrator | ok: [testbed-manager] 2025-09-08 00:32:07.539952 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:32:07.539963 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:32:07.539973 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:32:07.539984 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:32:07.539995 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:32:07.540006 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:32:07.540016 | 
orchestrator | 
2025-09-08 00:32:07.540027 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-09-08 00:32:07.540038 | orchestrator | Monday 08 September 2025 00:32:01 +0000 (0:00:01.864) 0:07:27.938 ******
2025-09-08 00:32:07.540049 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:32:07.540060 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:32:07.540078 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:32:07.540089 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:32:07.540099 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:32:07.540110 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:32:07.540121 | orchestrator | 
2025-09-08 00:32:07.540132 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-09-08 00:32:07.540148 | orchestrator | Monday 08 September 2025 00:32:02 +0000 (0:00:01.278) 0:07:29.217 ******
2025-09-08 00:32:07.540159 | orchestrator | changed: [testbed-manager]
2025-09-08 00:32:07.540170 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:32:07.540180 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:32:07.540191 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:32:07.540202 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:32:07.540213 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:32:07.540224 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:32:07.540234 | orchestrator | 
2025-09-08 00:32:07.540245 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-09-08 00:32:07.540256 | orchestrator | 
2025-09-08 00:32:07.540267 | orchestrator | TASK [Include hardening role] **************************************************
2025-09-08 00:32:07.540278 | orchestrator | Monday 08 September 2025 00:32:04 +0000 (0:00:01.203) 0:07:30.420 ******
2025-09-08 00:32:07.540289 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:32:07.540299 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:32:07.540310 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:32:07.540321 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:32:07.540332 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:32:07.540343 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:32:07.540353 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:32:07.540364 | orchestrator | 
2025-09-08 00:32:07.540375 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-09-08 00:32:07.540386 | orchestrator | 
2025-09-08 00:32:07.540397 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-09-08 00:32:07.540407 | orchestrator | Monday 08 September 2025 00:32:04 +0000 (0:00:00.492) 0:07:30.912 ******
2025-09-08 00:32:07.540418 | orchestrator | changed: [testbed-manager]
2025-09-08 00:32:07.540429 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:32:07.540439 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:32:07.540450 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:32:07.540461 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:32:07.540471 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:32:07.540482 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:32:07.540493 | orchestrator | 
2025-09-08 00:32:07.540503 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-09-08 00:32:07.540514 | orchestrator | Monday 08 September 2025 00:32:06 +0000 (0:00:01.563) 0:07:32.476 ******
2025-09-08 00:32:07.540525 | orchestrator | ok: [testbed-manager]
2025-09-08 00:32:07.540536 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:32:07.540547 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:32:07.540557 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:32:07.540583 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:32:07.540595 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:32:07.540605 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:32:07.540616 | orchestrator | 
2025-09-08 00:32:07.540627 | orchestrator | TASK [Include auditd role] *****************************************************
2025-09-08 00:32:07.540644 | orchestrator | Monday 08 September 2025 00:32:07 +0000 (0:00:01.433) 0:07:33.909 ******
2025-09-08 00:32:30.552835 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:32:30.552958 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:32:30.552974 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:32:30.552986 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:32:30.552998 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:32:30.553009 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:32:30.553020 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:32:30.553032 | orchestrator | 
2025-09-08 00:32:30.553071 | orchestrator | TASK [Include smartd role] *****************************************************
2025-09-08 00:32:30.553085 | orchestrator | Monday 08 September 2025 00:32:07 +0000 (0:00:00.481) 0:07:34.391 ******
2025-09-08 00:32:30.553097 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:32:30.553109 | orchestrator | 
2025-09-08 00:32:30.553120 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-09-08 00:32:30.553131 | orchestrator | Monday 08 September 2025 00:32:08 +0000 (0:00:00.997) 0:07:35.388 ******
2025-09-08 00:32:30.553143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:32:30.553157 | orchestrator | 
2025-09-08 00:32:30.553168 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-09-08 00:32:30.553179 | orchestrator | Monday 08 September 2025 00:32:09 +0000 (0:00:00.808) 0:07:36.196 ******
2025-09-08 00:32:30.553190 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:32:30.553201 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:32:30.553211 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:32:30.553222 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:32:30.553233 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:32:30.553243 | orchestrator | changed: [testbed-manager]
2025-09-08 00:32:30.553254 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:32:30.553264 | orchestrator | 
2025-09-08 00:32:30.553275 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-09-08 00:32:30.553286 | orchestrator | Monday 08 September 2025 00:32:17 +0000 (0:00:07.960) 0:07:44.157 ******
2025-09-08 00:32:30.553296 | orchestrator | changed: [testbed-manager]
2025-09-08 00:32:30.553307 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:32:30.553318 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:32:30.553328 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:32:30.553338 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:32:30.553349 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:32:30.553360 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:32:30.553372 | orchestrator | 
2025-09-08 00:32:30.553385 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-09-08 00:32:30.553398 | orchestrator | Monday 08 September 2025 00:32:18 +0000 (0:00:00.856) 0:07:45.013 ******
2025-09-08 00:32:30.553411 | orchestrator | changed: [testbed-manager]
2025-09-08 00:32:30.553423 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:32:30.553436 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:32:30.553448 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:32:30.553461 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:32:30.553474 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:32:30.553488 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:32:30.553499 | orchestrator | 
2025-09-08 00:32:30.553512 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-09-08 00:32:30.553550 | orchestrator | Monday 08 September 2025 00:32:20 +0000 (0:00:01.569) 0:07:46.583 ******
2025-09-08 00:32:30.553563 | orchestrator | changed: [testbed-manager]
2025-09-08 00:32:30.553576 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:32:30.553588 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:32:30.553601 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:32:30.553614 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:32:30.553626 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:32:30.553638 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:32:30.553651 | orchestrator | 
2025-09-08 00:32:30.553664 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-09-08 00:32:30.553677 | orchestrator | Monday 08 September 2025 00:32:21 +0000 (0:00:01.773) 0:07:48.357 ******
2025-09-08 00:32:30.553689 | orchestrator | changed: [testbed-manager]
2025-09-08 00:32:30.553712 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:32:30.553724 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:32:30.553734 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:32:30.553745 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:32:30.553756 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:32:30.553766 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:32:30.553777 | orchestrator | 
2025-09-08 00:32:30.553787 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-09-08 00:32:30.553798 | orchestrator | Monday 08 September 2025 00:32:23 +0000 (0:00:01.323) 0:07:49.681 ******
2025-09-08 00:32:30.553809 | orchestrator | changed: [testbed-manager]
2025-09-08 00:32:30.553820 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:32:30.553830 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:32:30.553840 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:32:30.553851 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:32:30.553861 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:32:30.553872 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:32:30.553882 | orchestrator | 
2025-09-08 00:32:30.553893 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-09-08 00:32:30.553904 | orchestrator | 
2025-09-08 00:32:30.553914 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-09-08 00:32:30.553925 | orchestrator | Monday 08 September 2025 00:32:24 +0000 (0:00:01.299) 0:07:50.980 ******
2025-09-08 00:32:30.553936 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:32:30.553947 | orchestrator | 
2025-09-08 00:32:30.553958 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-09-08 00:32:30.553985 | orchestrator | Monday 08 September 2025 00:32:25 +0000 (0:00:00.798) 0:07:51.779 ******
2025-09-08 00:32:30.553997 | orchestrator | ok: [testbed-manager]
2025-09-08 00:32:30.554009 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:32:30.554071 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:32:30.554082 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:32:30.554093 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:32:30.554104 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:32:30.554146 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:32:30.554159 | orchestrator | 
2025-09-08 00:32:30.554170 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-09-08 00:32:30.554181 | orchestrator | Monday 08 September 2025 00:32:26 +0000 (0:00:00.823) 0:07:52.602 ******
2025-09-08 00:32:30.554191 | orchestrator | changed: [testbed-manager]
2025-09-08 00:32:30.554250 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:32:30.554262 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:32:30.554273 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:32:30.554283 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:32:30.554294 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:32:30.554304 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:32:30.554315 | orchestrator | 
2025-09-08 00:32:30.554326 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-09-08 00:32:30.554337 | orchestrator | Monday 08 September 2025 00:32:27 +0000 (0:00:01.341) 0:07:53.943 ******
2025-09-08 00:32:30.554348 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:32:30.554358 | orchestrator | 
2025-09-08 00:32:30.554369 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-09-08 00:32:30.554380 | orchestrator | Monday 08 September 2025 00:32:28 +0000 (0:00:00.856) 0:07:54.800 ******
2025-09-08 00:32:30.554390 | orchestrator | ok: [testbed-manager]
2025-09-08 00:32:30.554401 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:32:30.554412 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:32:30.554422 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:32:30.554433 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:32:30.554452 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:32:30.554463 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:32:30.554473 | orchestrator | 
2025-09-08 00:32:30.554484 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-09-08 00:32:30.554495 | orchestrator | Monday 08 September 2025 00:32:29 +0000 (0:00:00.835) 0:07:55.635 ******
2025-09-08 00:32:30.554506 | orchestrator | changed: [testbed-manager]
2025-09-08 00:32:30.554516 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:32:30.554562 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:32:30.554573 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:32:30.554584 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:32:30.554595 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:32:30.554605 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:32:30.554616 | orchestrator | 
2025-09-08 00:32:30.554627 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:32:30.554639 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-09-08 00:32:30.554650 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-09-08 00:32:30.554666 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-08 00:32:30.554677 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-08 00:32:30.554688 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-08 00:32:30.554699 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-08 00:32:30.554709 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-08 00:32:30.554720 | orchestrator | 
2025-09-08 00:32:30.554731 | orchestrator | 
2025-09-08 00:32:30.554742 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:32:30.554753 | orchestrator | Monday 08 September 2025 00:32:30 +0000 (0:00:01.285) 0:07:56.921 ******
2025-09-08 00:32:30.554763 | orchestrator | ===============================================================================
2025-09-08 00:32:30.554774 | orchestrator | osism.commons.packages : Install required packages --------------------- 81.57s
2025-09-08 00:32:30.554785 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.30s
2025-09-08 00:32:30.554796 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.39s
2025-09-08 00:32:30.554806 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.92s
2025-09-08 00:32:30.554817 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.59s
2025-09-08 00:32:30.554828 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.44s
2025-09-08 00:32:30.554839 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.50s
2025-09-08 00:32:30.554850 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.70s
2025-09-08 00:32:30.554860 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.61s
2025-09-08 00:32:30.554871 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.44s
2025-09-08 00:32:30.554891 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.36s
2025-09-08 00:32:31.027423 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.11s
2025-09-08 00:32:31.027515 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 7.96s
2025-09-08 00:32:31.027589 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.80s
2025-09-08 00:32:31.027602 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.67s
2025-09-08 00:32:31.027613 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.30s
2025-09-08 00:32:31.027624 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.20s
2025-09-08 00:32:31.027635 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.76s
2025-09-08 00:32:31.027646 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.75s
2025-09-08 00:32:31.027657 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.74s
2025-09-08 00:32:31.328400 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-09-08 00:32:31.328502 | orchestrator | + osism apply network
2025-09-08 00:32:43.930813 | orchestrator | 2025-09-08 00:32:43 | INFO  | Task 898dc447-6350-4c6b-a308-7f1b88f0413a (network) was prepared for execution.
2025-09-08 00:32:43.930935 | orchestrator | 2025-09-08 00:32:43 | INFO  | It takes a moment until task 898dc447-6350-4c6b-a308-7f1b88f0413a (network) has been started and output is visible here.
2025-09-08 00:33:12.658125 | orchestrator | 
2025-09-08 00:33:12.658247 | orchestrator | PLAY [Apply role network] ******************************************************
2025-09-08 00:33:12.658266 | orchestrator | 
2025-09-08 00:33:12.658279 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-09-08 00:33:12.658290 | orchestrator | Monday 08 September 2025 00:32:48 +0000 (0:00:00.272) 0:00:00.272 ******
2025-09-08 00:33:12.658302 | orchestrator | ok: [testbed-manager]
2025-09-08 00:33:12.658315 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:33:12.658326 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:33:12.658337 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:33:12.658348 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:33:12.658359 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:33:12.658370 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:33:12.658381 | orchestrator | 
2025-09-08 00:33:12.658392 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-09-08 00:33:12.658403 | orchestrator | Monday 08 September 2025 00:32:49 +0000 (0:00:00.725) 0:00:00.997 ******
2025-09-08 00:33:12.658415 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:33:12.658429 | orchestrator | 
2025-09-08 00:33:12.658440 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-09-08 00:33:12.658512 | orchestrator | Monday 08 September 2025 00:32:50 +0000 (0:00:01.182) 0:00:02.180 ******
2025-09-08 00:33:12.658524 | orchestrator | ok: [testbed-manager]
2025-09-08 00:33:12.658536 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:33:12.658547 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:33:12.658560 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:33:12.658572 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:33:12.658585 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:33:12.658597 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:33:12.658610 | orchestrator | 
2025-09-08 00:33:12.658623 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-09-08 00:33:12.658635 | orchestrator | Monday 08 September 2025 00:32:52 +0000 (0:00:01.931) 0:00:04.111 ******
2025-09-08 00:33:12.658648 | orchestrator | ok: [testbed-manager]
2025-09-08 00:33:12.658662 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:33:12.658675 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:33:12.658688 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:33:12.658701 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:33:12.658713 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:33:12.658725 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:33:12.658737 | orchestrator | 
2025-09-08 00:33:12.658750 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-09-08 00:33:12.658789 | orchestrator | Monday 08 September 2025 00:32:53 +0000 (0:00:00.958) 0:00:05.881 ******
2025-09-08 00:33:12.658803 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-09-08 00:33:12.658817 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-09-08 00:33:12.658830 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-09-08 00:33:12.658843 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-09-08 00:33:12.658855 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-09-08 00:33:12.658867 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-09-08 00:33:12.658879 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-09-08 00:33:12.658892 | orchestrator | 
2025-09-08 00:33:12.658906 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-09-08 00:33:12.658917 | orchestrator | Monday 08 September 2025 00:32:54 +0000 (0:00:00.958) 0:00:06.839 ******
2025-09-08 00:33:12.658928 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-08 00:33:12.658940 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-08 00:33:12.658950 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-08 00:33:12.658961 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-08 00:33:12.658971 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-08 00:33:12.658982 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-08 00:33:12.658993 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-08 00:33:12.659003 | orchestrator | 
2025-09-08 00:33:12.659014 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-09-08 00:33:12.659025 | orchestrator | Monday 08 September 2025 00:32:58 +0000 (0:00:03.209) 0:00:10.048 ******
2025-09-08 00:33:12.659035 | orchestrator | changed: [testbed-manager]
2025-09-08 00:33:12.659047 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:33:12.659057 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:33:12.659068 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:33:12.659078 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:33:12.659089 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:33:12.659099 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:33:12.659110 | orchestrator | 
2025-09-08 00:33:12.659121 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-09-08 00:33:12.659132 | orchestrator | Monday 08 September 2025 00:32:59 +0000 (0:00:01.475) 0:00:11.524 ******
2025-09-08 00:33:12.659142 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-08 00:33:12.659153 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-08 00:33:12.659164 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-08 00:33:12.659174 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-08 00:33:12.659185 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-08 00:33:12.659196 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-08 00:33:12.659206 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-08 00:33:12.659217 | orchestrator | 
2025-09-08 00:33:12.659227 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-09-08 00:33:12.659238 | orchestrator | Monday 08 September 2025 00:33:01 +0000 (0:00:01.950) 0:00:13.474 ******
2025-09-08 00:33:12.659249 | orchestrator | ok: [testbed-manager]
2025-09-08 00:33:12.659260 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:33:12.659270 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:33:12.659281 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:33:12.659292 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:33:12.659302 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:33:12.659313 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:33:12.659323 | orchestrator | 
2025-09-08 00:33:12.659334 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-09-08 00:33:12.659363 | orchestrator | Monday 08 September 2025 00:33:02 +0000 (0:00:01.057) 0:00:14.532 ******
2025-09-08 00:33:12.659375 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:33:12.659386 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:33:12.659397 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:33:12.659416 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:33:12.659428 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:33:12.659438 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:33:12.659467 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:33:12.659478 | orchestrator | 
2025-09-08 00:33:12.659489 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-09-08 00:33:12.659500 | orchestrator | Monday 08 September 2025 00:33:03 +0000 (0:00:00.656) 0:00:15.188 ******
2025-09-08 00:33:12.659511 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:33:12.659521 | orchestrator | ok: [testbed-manager]
2025-09-08 00:33:12.659532 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:33:12.659543 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:33:12.659554 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:33:12.659564 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:33:12.659575 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:33:12.659585 | orchestrator | 
2025-09-08 00:33:12.659596 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-09-08 00:33:12.659607 | orchestrator | Monday 08 September 2025 00:33:05 +0000 (0:00:02.447) 0:00:17.636 ******
2025-09-08 00:33:12.659618 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:33:12.659628 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:33:12.659639 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:33:12.659650 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:33:12.659661 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:33:12.659671 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:33:12.659696 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-09-08 00:33:12.659708 | orchestrator | 
2025-09-08 00:33:12.659719 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-09-08 00:33:12.659730 | orchestrator | Monday 08 September 2025 00:33:06 +0000 (0:00:00.942) 0:00:18.579 ******
2025-09-08 00:33:12.659741 | orchestrator | ok: [testbed-manager]
2025-09-08 00:33:12.659751 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:33:12.659762 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:33:12.659773 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:33:12.659783 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:33:12.659794 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:33:12.659805 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:33:12.659816 | orchestrator | 
2025-09-08 00:33:12.659826 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-09-08 00:33:12.659837 | orchestrator | Monday 08 September 2025 00:33:08 +0000 (0:00:01.628) 0:00:20.207 ******
2025-09-08 00:33:12.659848 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:33:12.659861 | orchestrator | 
2025-09-08 00:33:12.659872 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-09-08 00:33:12.659883 | orchestrator | Monday 08 September 2025 00:33:09 +0000 (0:00:01.297) 0:00:21.504 ******
2025-09-08 00:33:12.659894 | orchestrator | ok: [testbed-manager]
2025-09-08 00:33:12.659905 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:33:12.659916 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:33:12.659926 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:33:12.659937 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:33:12.659948 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:33:12.659958 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:33:12.659969 | orchestrator | 
2025-09-08 00:33:12.659980 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-09-08 00:33:12.659991 | orchestrator | Monday 08 September 2025 00:33:10 +0000 (0:00:00.869) 0:00:22.510 ******
2025-09-08 00:33:12.660001 | orchestrator | ok: [testbed-manager]
2025-09-08 00:33:12.660012 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:33:12.660023 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:33:12.660041 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:33:12.660051 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:33:12.660062 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:33:12.660073 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:33:12.660083 | orchestrator | 
2025-09-08 00:33:12.660094 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-09-08 00:33:12.660104 | orchestrator | Monday 08 September 2025 00:33:11 +0000 (0:00:00.869) 0:00:23.379 ******
2025-09-08 00:33:12.660115 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-09-08 00:33:12.660126 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-09-08 00:33:12.660137 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-09-08 00:33:12.660147 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-09-08 00:33:12.660158 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-08 00:33:12.660168 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-09-08 00:33:12.660179 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-08 00:33:12.660190 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-09-08 00:33:12.660200 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-08 00:33:12.660211 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-09-08 00:33:12.660221 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-08 00:33:12.660232 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-08 00:33:12.660242 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-08 00:33:12.660253 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-08 00:33:12.660264 | orchestrator | 
2025-09-08 00:33:12.660282 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-09-08 00:33:29.934284 | orchestrator | Monday 08 September 2025 00:33:12 +0000 (0:00:01.192) 0:00:24.571 ******
2025-09-08 00:33:29.934442 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:33:29.934458 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:33:29.934468 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:33:29.934477 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:33:29.934487 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:33:29.934495 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:33:29.934505 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:33:29.934515 | orchestrator | 
2025-09-08 00:33:29.934525 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-09-08 00:33:29.934535 | orchestrator | Monday 08 September 2025 00:33:13 +0000 (0:00:00.669) 0:00:25.241 ******
2025-09-08 00:33:29.934545 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-3, testbed-node-2, testbed-node-4, testbed-node-5
2025-09-08 00:33:29.934557 | orchestrator | 
2025-09-08 00:33:29.934566 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-09-08 00:33:29.934575 | orchestrator | Monday 08 September 2025 00:33:18 +0000 (0:00:04.853) 0:00:30.094 ******
2025-09-08 00:33:29.934600 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-08 00:33:29.934613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-08 00:33:29.934623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-08 00:33:29.934655 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-09-08 00:33:29.934665 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-09-08 00:33:29.934674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-09-08 00:33:29.934683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-09-08 00:33:29.934692 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-09-08 00:33:29.934701 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-09-08 00:33:29.934710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-09-08 00:33:29.934726 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-09-08 00:33:29.934751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-09-08 00:33:29.934761 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-09-08 00:33:29.934770 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14',
'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:29.934779 | orchestrator | 2025-09-08 00:33:29.934788 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-09-08 00:33:29.934797 | orchestrator | Monday 08 September 2025 00:33:24 +0000 (0:00:05.916) 0:00:36.011 ****** 2025-09-08 00:33:29.934806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-08 00:33:29.934829 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-08 00:33:29.934840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-08 00:33:29.934850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:29.934861 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:29.934872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-08 00:33:29.934882 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-08 00:33:29.934892 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-08 00:33:29.934902 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-08 00:33:29.934912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:29.934923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:29.934933 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:29.934955 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:36.132919 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:36.133033 | orchestrator | 2025-09-08 00:33:36.133049 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-08 00:33:36.133064 | orchestrator | Monday 08 September 2025 00:33:29 +0000 (0:00:05.833) 0:00:41.844 ****** 2025-09-08 00:33:36.133100 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:33:36.133112 | orchestrator | 2025-09-08 00:33:36.133124 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-08 00:33:36.133135 | orchestrator | Monday 08 September 2025 00:33:31 +0000 (0:00:01.295) 0:00:43.139 ****** 2025-09-08 00:33:36.133147 | orchestrator | ok: [testbed-manager] 2025-09-08 00:33:36.133159 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:33:36.133170 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:33:36.133181 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:33:36.133192 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:33:36.133203 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:33:36.133214 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:33:36.133225 | orchestrator | 2025-09-08 00:33:36.133236 | orchestrator | TASK [osism.commons.network : Remove unused configuration 
files] *************** 2025-09-08 00:33:36.133247 | orchestrator | Monday 08 September 2025 00:33:32 +0000 (0:00:01.168) 0:00:44.308 ****** 2025-09-08 00:33:36.133258 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-08 00:33:36.133270 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-08 00:33:36.133281 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-08 00:33:36.133292 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-08 00:33:36.133302 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-08 00:33:36.133313 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-08 00:33:36.133324 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-08 00:33:36.133335 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:33:36.133346 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-08 00:33:36.133357 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-08 00:33:36.133368 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-08 00:33:36.133379 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-08 00:33:36.133389 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-08 00:33:36.133431 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:33:36.133444 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-08 00:33:36.133457 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-08 
00:33:36.133488 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-08 00:33:36.133501 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:33:36.133513 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-08 00:33:36.133525 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-08 00:33:36.133538 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-08 00:33:36.133550 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-08 00:33:36.133562 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-08 00:33:36.133574 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:33:36.133586 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-08 00:33:36.133598 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-08 00:33:36.133612 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-08 00:33:36.133635 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-08 00:33:36.133649 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:33:36.133662 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:33:36.133674 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-08 00:33:36.133687 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-08 00:33:36.133700 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-08 00:33:36.133712 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-08 00:33:36.133723 | 
orchestrator | skipping: [testbed-node-5] 2025-09-08 00:33:36.133736 | orchestrator | 2025-09-08 00:33:36.133749 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-09-08 00:33:36.133780 | orchestrator | Monday 08 September 2025 00:33:34 +0000 (0:00:02.015) 0:00:46.324 ****** 2025-09-08 00:33:36.133795 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:33:36.133807 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:33:36.133818 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:33:36.133829 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:33:36.133840 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:33:36.133851 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:33:36.133862 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:33:36.133873 | orchestrator | 2025-09-08 00:33:36.133884 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-09-08 00:33:36.133895 | orchestrator | Monday 08 September 2025 00:33:35 +0000 (0:00:00.634) 0:00:46.958 ****** 2025-09-08 00:33:36.133906 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:33:36.133917 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:33:36.133928 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:33:36.133938 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:33:36.133949 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:33:36.133960 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:33:36.133971 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:33:36.133982 | orchestrator | 2025-09-08 00:33:36.133993 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:33:36.134010 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-08 00:33:36.134073 | orchestrator | testbed-node-0 : ok=20  changed=5  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-08 00:33:36.134085 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-08 00:33:36.134096 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-08 00:33:36.134106 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-08 00:33:36.134117 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-08 00:33:36.134128 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-08 00:33:36.134138 | orchestrator | 2025-09-08 00:33:36.134149 | orchestrator | 2025-09-08 00:33:36.134160 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:33:36.134171 | orchestrator | Monday 08 September 2025 00:33:35 +0000 (0:00:00.719) 0:00:47.678 ****** 2025-09-08 00:33:36.134182 | orchestrator | =============================================================================== 2025-09-08 00:33:36.134200 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.92s 2025-09-08 00:33:36.134211 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.83s 2025-09-08 00:33:36.134221 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.85s 2025-09-08 00:33:36.134232 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.21s 2025-09-08 00:33:36.134243 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.45s 2025-09-08 00:33:36.134253 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.02s 2025-09-08 00:33:36.134264 | orchestrator | osism.commons.network : Remove netplan 
configuration template ----------- 1.95s 2025-09-08 00:33:36.134274 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.93s 2025-09-08 00:33:36.134285 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.77s 2025-09-08 00:33:36.134296 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.63s 2025-09-08 00:33:36.134306 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.48s 2025-09-08 00:33:36.134317 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.30s 2025-09-08 00:33:36.134328 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.30s 2025-09-08 00:33:36.134338 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.19s 2025-09-08 00:33:36.134349 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.18s 2025-09-08 00:33:36.134359 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.17s 2025-09-08 00:33:36.134370 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.06s 2025-09-08 00:33:36.134381 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.01s 2025-09-08 00:33:36.134392 | orchestrator | osism.commons.network : Create required directories --------------------- 0.96s 2025-09-08 00:33:36.134421 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.94s 2025-09-08 00:33:36.433208 | orchestrator | + osism apply wireguard 2025-09-08 00:33:48.441819 | orchestrator | 2025-09-08 00:33:48 | INFO  | Task 021e41d4-67b5-4691-9a4e-9f19049f7231 (wireguard) was prepared for execution. 
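For orientation, the "Create systemd networkd netdev files" / "network files" tasks in the network play above render one unit-file pair per VXLAN (the cleanup task later lists them as /etc/systemd/network/30-vxlan0.netdev, 30-vxlan0.network, etc.). A minimal sketch of what such a pair might look like for testbed-node-0's vxlan0 (VNI 42, MTU 1350, local IP 192.168.16.10, all taken from the loop items above) — the role's actual templates may differ:

```ini
# /etc/systemd/network/30-vxlan0.netdev -- illustrative sketch, not the role's actual template
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.10

# /etc/systemd/network/30-vxlan0.network
[Match]
Name=vxlan0

[Network]
# vxlan0 carries no address on the compute nodes (addresses: [] in the items above);
# vxlan1 would additionally get e.g. Address=192.168.128.10/20
LinkLocalAddressing=no
```

Since each item lists several unicast `dests` rather than a multicast group, the role presumably also programs a forwarding entry per remote, e.g. via `[BridgeFDB]` sections with `Destination=` in the .network file or equivalent `bridge fdb append` calls.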
2025-09-08 00:33:48.441939 | orchestrator | 2025-09-08 00:33:48 | INFO  | It takes a moment until task 021e41d4-67b5-4691-9a4e-9f19049f7231 (wireguard) has been started and output is visible here. 2025-09-08 00:34:08.178804 | orchestrator | 2025-09-08 00:34:08.178930 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-08 00:34:08.178947 | orchestrator | 2025-09-08 00:34:08.178959 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-08 00:34:08.178971 | orchestrator | Monday 08 September 2025 00:33:52 +0000 (0:00:00.226) 0:00:00.226 ****** 2025-09-08 00:34:08.178982 | orchestrator | ok: [testbed-manager] 2025-09-08 00:34:08.178995 | orchestrator | 2025-09-08 00:34:08.179006 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-08 00:34:08.179017 | orchestrator | Monday 08 September 2025 00:33:54 +0000 (0:00:01.544) 0:00:01.770 ****** 2025-09-08 00:34:08.179028 | orchestrator | changed: [testbed-manager] 2025-09-08 00:34:08.179039 | orchestrator | 2025-09-08 00:34:08.179050 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-08 00:34:08.179061 | orchestrator | Monday 08 September 2025 00:34:00 +0000 (0:00:06.528) 0:00:08.298 ****** 2025-09-08 00:34:08.179072 | orchestrator | changed: [testbed-manager] 2025-09-08 00:34:08.179083 | orchestrator | 2025-09-08 00:34:08.179093 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-08 00:34:08.179104 | orchestrator | Monday 08 September 2025 00:34:01 +0000 (0:00:00.560) 0:00:08.859 ****** 2025-09-08 00:34:08.179134 | orchestrator | changed: [testbed-manager] 2025-09-08 00:34:08.179172 | orchestrator | 2025-09-08 00:34:08.179183 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-08 00:34:08.179195 | orchestrator 
| Monday 08 September 2025 00:34:01 +0000 (0:00:00.422) 0:00:09.282 ****** 2025-09-08 00:34:08.179206 | orchestrator | ok: [testbed-manager] 2025-09-08 00:34:08.179217 | orchestrator | 2025-09-08 00:34:08.179228 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-08 00:34:08.179239 | orchestrator | Monday 08 September 2025 00:34:02 +0000 (0:00:00.522) 0:00:09.805 ****** 2025-09-08 00:34:08.179249 | orchestrator | ok: [testbed-manager] 2025-09-08 00:34:08.179260 | orchestrator | 2025-09-08 00:34:08.179271 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-08 00:34:08.179282 | orchestrator | Monday 08 September 2025 00:34:02 +0000 (0:00:00.512) 0:00:10.318 ****** 2025-09-08 00:34:08.179293 | orchestrator | ok: [testbed-manager] 2025-09-08 00:34:08.179303 | orchestrator | 2025-09-08 00:34:08.179314 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-08 00:34:08.179325 | orchestrator | Monday 08 September 2025 00:34:03 +0000 (0:00:00.419) 0:00:10.738 ****** 2025-09-08 00:34:08.179337 | orchestrator | changed: [testbed-manager] 2025-09-08 00:34:08.179383 | orchestrator | 2025-09-08 00:34:08.179397 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-09-08 00:34:08.179410 | orchestrator | Monday 08 September 2025 00:34:04 +0000 (0:00:01.191) 0:00:11.929 ****** 2025-09-08 00:34:08.179423 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-08 00:34:08.179437 | orchestrator | changed: [testbed-manager] 2025-09-08 00:34:08.179449 | orchestrator | 2025-09-08 00:34:08.179462 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-09-08 00:34:08.179475 | orchestrator | Monday 08 September 2025 00:34:05 +0000 (0:00:00.956) 0:00:12.885 ****** 2025-09-08 00:34:08.179488 | orchestrator | changed: 
[testbed-manager] 2025-09-08 00:34:08.179501 | orchestrator | 2025-09-08 00:34:08.179514 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-09-08 00:34:08.179527 | orchestrator | Monday 08 September 2025 00:34:06 +0000 (0:00:01.716) 0:00:14.601 ****** 2025-09-08 00:34:08.179540 | orchestrator | changed: [testbed-manager] 2025-09-08 00:34:08.179553 | orchestrator | 2025-09-08 00:34:08.179566 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:34:08.179579 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:34:08.179593 | orchestrator | 2025-09-08 00:34:08.179606 | orchestrator | 2025-09-08 00:34:08.179619 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:34:08.179633 | orchestrator | Monday 08 September 2025 00:34:07 +0000 (0:00:00.957) 0:00:15.558 ****** 2025-09-08 00:34:08.179646 | orchestrator | =============================================================================== 2025-09-08 00:34:08.179658 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.53s 2025-09-08 00:34:08.179672 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.72s 2025-09-08 00:34:08.179685 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.54s 2025-09-08 00:34:08.179698 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.19s 2025-09-08 00:34:08.179709 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.96s 2025-09-08 00:34:08.179720 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.96s 2025-09-08 00:34:08.179730 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 
2025-09-08 00:34:08.179741 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s 2025-09-08 00:34:08.179752 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.51s 2025-09-08 00:34:08.179763 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s 2025-09-08 00:34:08.179782 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2025-09-08 00:34:08.485813 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-09-08 00:34:08.527804 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-09-08 00:34:08.527850 | orchestrator | Dload Upload Total Spent Left Speed 2025-09-08 00:34:08.601644 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 202 0 --:--:-- --:--:-- --:--:-- 205 2025-09-08 00:34:08.616636 | orchestrator | + osism apply --environment custom workarounds 2025-09-08 00:34:10.526309 | orchestrator | 2025-09-08 00:34:10 | INFO  | Trying to run play workarounds in environment custom 2025-09-08 00:34:20.716201 | orchestrator | 2025-09-08 00:34:20 | INFO  | Task a195c022-c8b9-44ac-8a6f-101d7e828121 (workarounds) was prepared for execution. 2025-09-08 00:34:20.716375 | orchestrator | 2025-09-08 00:34:20 | INFO  | It takes a moment until task a195c022-c8b9-44ac-8a6f-101d7e828121 (workarounds) has been started and output is visible here. 
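The wireguard play above generates the server key pair and a preshared key, then renders /etc/wireguard/wg0.conf before enabling wg-quick@wg0.service. A generic sketch of such a file (all keys and addresses here are placeholders, not values from this job):

```ini
# /etc/wireguard/wg0.conf -- illustrative only; keys and addresses are placeholders
[Interface]
Address = 192.168.48.1/24
ListenPort = 51820
PrivateKey = <server private key>

[Peer]
PublicKey = <client public key>
PresharedKey = <preshared key>
AllowedIPs = 192.168.48.2/32
```

wg-quick@wg0.service reads this file on start, which is why the play's final handler restarts wg0 only after the configuration has been copied.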
2025-09-08 00:34:45.793329 | orchestrator | 2025-09-08 00:34:45.793453 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 00:34:45.793470 | orchestrator | 2025-09-08 00:34:45.793483 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-09-08 00:34:45.793495 | orchestrator | Monday 08 September 2025 00:34:24 +0000 (0:00:00.145) 0:00:00.145 ****** 2025-09-08 00:34:45.793507 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-09-08 00:34:45.793518 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-09-08 00:34:45.793538 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-09-08 00:34:45.793550 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-09-08 00:34:45.793561 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-09-08 00:34:45.793572 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-09-08 00:34:45.793583 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-09-08 00:34:45.793594 | orchestrator | 2025-09-08 00:34:45.793604 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-09-08 00:34:45.793615 | orchestrator | 2025-09-08 00:34:45.793626 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-08 00:34:45.793637 | orchestrator | Monday 08 September 2025 00:34:25 +0000 (0:00:00.810) 0:00:00.956 ****** 2025-09-08 00:34:45.793648 | orchestrator | ok: [testbed-manager] 2025-09-08 00:34:45.793660 | orchestrator | 2025-09-08 00:34:45.793671 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-09-08 00:34:45.793682 | orchestrator | 2025-09-08 00:34:45.793693 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-09-08 00:34:45.793703 | orchestrator | Monday 08 September 2025 00:34:27 +0000 (0:00:02.495) 0:00:03.452 ****** 2025-09-08 00:34:45.793714 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:34:45.793725 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:34:45.793736 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:34:45.793747 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:34:45.793758 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:34:45.793769 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:34:45.793779 | orchestrator | 2025-09-08 00:34:45.793792 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-09-08 00:34:45.793803 | orchestrator | 2025-09-08 00:34:45.793814 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-09-08 00:34:45.793825 | orchestrator | Monday 08 September 2025 00:34:29 +0000 (0:00:01.760) 0:00:05.212 ****** 2025-09-08 00:34:45.793836 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-08 00:34:45.793848 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-08 00:34:45.793877 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-08 00:34:45.793888 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-08 00:34:45.793899 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-08 00:34:45.793910 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-08 00:34:45.793921 | orchestrator | 2025-09-08 00:34:45.793932 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-09-08 00:34:45.793943 | orchestrator | Monday 08 September 2025 00:34:31 +0000 (0:00:01.503) 0:00:06.715 ****** 2025-09-08 00:34:45.793953 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:34:45.793964 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:34:45.793975 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:34:45.793986 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:34:45.793997 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:34:45.794007 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:34:45.794096 | orchestrator | 2025-09-08 00:34:45.794111 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-09-08 00:34:45.794122 | orchestrator | Monday 08 September 2025 00:34:35 +0000 (0:00:03.847) 0:00:10.563 ****** 2025-09-08 00:34:45.794133 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:34:45.794144 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:34:45.794155 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:34:45.794166 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:34:45.794176 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:34:45.794187 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:34:45.794198 | orchestrator | 2025-09-08 00:34:45.794209 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-09-08 00:34:45.794220 | orchestrator | 2025-09-08 00:34:45.794231 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-09-08 00:34:45.794242 | orchestrator | Monday 08 September 2025 00:34:35 +0000 (0:00:00.699) 0:00:11.262 ****** 2025-09-08 00:34:45.794253 | orchestrator | changed: [testbed-manager] 2025-09-08 00:34:45.794263 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:34:45.794304 | orchestrator | changed: [testbed-node-5] 2025-09-08 
00:34:45.794315 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:34:45.794326 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:34:45.794337 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:34:45.794348 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:34:45.794359 | orchestrator |
2025-09-08 00:34:45.794369 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-09-08 00:34:45.794380 | orchestrator | Monday 08 September 2025 00:34:37 +0000 (0:00:01.654) 0:00:12.916 ******
2025-09-08 00:34:45.794391 | orchestrator | changed: [testbed-manager]
2025-09-08 00:34:45.794402 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:34:45.794413 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:34:45.794424 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:34:45.794435 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:34:45.794445 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:34:45.794474 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:34:45.794485 | orchestrator |
2025-09-08 00:34:45.794496 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-09-08 00:34:45.794507 | orchestrator | Monday 08 September 2025 00:34:39 +0000 (0:00:01.624) 0:00:14.541 ******
2025-09-08 00:34:45.794519 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:34:45.794530 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:34:45.794541 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:34:45.794552 | orchestrator | ok: [testbed-manager]
2025-09-08 00:34:45.794562 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:34:45.794583 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:34:45.794594 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:34:45.794604 | orchestrator |
2025-09-08 00:34:45.794622 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-09-08 00:34:45.794633 | orchestrator | Monday 08 September 2025 00:34:40 +0000 (0:00:01.493) 0:00:16.035 ******
2025-09-08 00:34:45.794644 | orchestrator | changed: [testbed-manager]
2025-09-08 00:34:45.794655 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:34:45.794665 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:34:45.794676 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:34:45.794687 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:34:45.794697 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:34:45.794708 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:34:45.794719 | orchestrator |
2025-09-08 00:34:45.794729 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-09-08 00:34:45.794740 | orchestrator | Monday 08 September 2025 00:34:42 +0000 (0:00:01.753) 0:00:17.788 ******
2025-09-08 00:34:45.794751 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:34:45.794762 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:34:45.794773 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:34:45.794783 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:34:45.794794 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:34:45.794805 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:34:45.794815 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:34:45.794826 | orchestrator |
2025-09-08 00:34:45.794837 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-09-08 00:34:45.794848 | orchestrator |
2025-09-08 00:34:45.794858 | orchestrator | TASK [Install python3-docker] **************************************************
2025-09-08 00:34:45.794869 | orchestrator | Monday 08 September 2025 00:34:42 +0000 (0:00:00.629) 0:00:18.417 ******
2025-09-08 00:34:45.794880 | orchestrator | ok: [testbed-manager]
2025-09-08 00:34:45.794891 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:34:45.794901 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:34:45.794912 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:34:45.794923 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:34:45.794933 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:34:45.794944 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:34:45.794955 | orchestrator |
2025-09-08 00:34:45.794966 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:34:45.794978 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-08 00:34:45.794990 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:34:45.795001 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:34:45.795012 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:34:45.795023 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:34:45.795034 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:34:45.795044 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:34:45.795055 | orchestrator |
2025-09-08 00:34:45.795066 | orchestrator |
2025-09-08 00:34:45.795077 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:34:45.795088 | orchestrator | Monday 08 September 2025 00:34:45 +0000 (0:00:02.800) 0:00:21.218 ******
2025-09-08 00:34:45.795106 | orchestrator | ===============================================================================
2025-09-08 00:34:45.795117 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.85s
2025-09-08 00:34:45.795128 | orchestrator | Install python3-docker -------------------------------------------------- 2.80s
2025-09-08 00:34:45.795138 | orchestrator | Apply netplan configuration --------------------------------------------- 2.50s
2025-09-08 00:34:45.795149 | orchestrator | Apply netplan configuration --------------------------------------------- 1.76s
2025-09-08 00:34:45.795160 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.75s
2025-09-08 00:34:45.795171 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.65s
2025-09-08 00:34:45.795182 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.62s
2025-09-08 00:34:45.795193 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.50s
2025-09-08 00:34:45.795203 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.49s
2025-09-08 00:34:45.795214 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.81s
2025-09-08 00:34:45.795225 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.70s
2025-09-08 00:34:45.795242 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s
2025-09-08 00:34:46.447750 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-09-08 00:34:58.539370 | orchestrator | 2025-09-08 00:34:58 | INFO  | Task c185cc4d-b044-4cdb-820e-2b7c9c9a25eb (reboot) was prepared for execution.
2025-09-08 00:34:58.539513 | orchestrator | 2025-09-08 00:34:58 | INFO  | It takes a moment until task c185cc4d-b044-4cdb-820e-2b7c9c9a25eb (reboot) has been started and output is visible here.
2025-09-08 00:35:08.453197 | orchestrator |
2025-09-08 00:35:08.453344 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-08 00:35:08.453362 | orchestrator |
2025-09-08 00:35:08.453374 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-08 00:35:08.453386 | orchestrator | Monday 08 September 2025 00:35:02 +0000 (0:00:00.212) 0:00:00.212 ******
2025-09-08 00:35:08.453397 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:35:08.453409 | orchestrator |
2025-09-08 00:35:08.453420 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-08 00:35:08.453431 | orchestrator | Monday 08 September 2025 00:35:02 +0000 (0:00:00.103) 0:00:00.316 ******
2025-09-08 00:35:08.453442 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:35:08.453453 | orchestrator |
2025-09-08 00:35:08.453464 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-08 00:35:08.453475 | orchestrator | Monday 08 September 2025 00:35:03 +0000 (0:00:00.951) 0:00:01.267 ******
2025-09-08 00:35:08.453486 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:35:08.453497 | orchestrator |
2025-09-08 00:35:08.453509 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-08 00:35:08.453520 | orchestrator |
2025-09-08 00:35:08.453531 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-08 00:35:08.453542 | orchestrator | Monday 08 September 2025 00:35:03 +0000 (0:00:00.109) 0:00:01.377 ******
2025-09-08 00:35:08.453553 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:35:08.453564 | orchestrator |
2025-09-08 00:35:08.453575 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-08 00:35:08.453586 | orchestrator | Monday 08 September 2025 00:35:03 +0000 (0:00:00.098) 0:00:01.476 ******
2025-09-08 00:35:08.453597 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:35:08.453608 | orchestrator |
2025-09-08 00:35:08.453619 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-08 00:35:08.453631 | orchestrator | Monday 08 September 2025 00:35:04 +0000 (0:00:00.656) 0:00:02.132 ******
2025-09-08 00:35:08.453642 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:35:08.453678 | orchestrator |
2025-09-08 00:35:08.453690 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-08 00:35:08.453701 | orchestrator |
2025-09-08 00:35:08.453711 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-08 00:35:08.453722 | orchestrator | Monday 08 September 2025 00:35:04 +0000 (0:00:00.115) 0:00:02.248 ******
2025-09-08 00:35:08.453733 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:35:08.453745 | orchestrator |
2025-09-08 00:35:08.453759 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-08 00:35:08.453772 | orchestrator | Monday 08 September 2025 00:35:04 +0000 (0:00:00.203) 0:00:02.451 ******
2025-09-08 00:35:08.453785 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:35:08.453799 | orchestrator |
2025-09-08 00:35:08.453812 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-08 00:35:08.453825 | orchestrator | Monday 08 September 2025 00:35:05 +0000 (0:00:00.676) 0:00:03.128 ******
2025-09-08 00:35:08.453838 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:35:08.453851 | orchestrator |
2025-09-08 00:35:08.453864 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-08 00:35:08.453876 | orchestrator |
2025-09-08 00:35:08.453889 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-08 00:35:08.453902 | orchestrator | Monday 08 September 2025 00:35:05 +0000 (0:00:00.137) 0:00:03.265 ******
2025-09-08 00:35:08.453915 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:35:08.453929 | orchestrator |
2025-09-08 00:35:08.453943 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-08 00:35:08.453956 | orchestrator | Monday 08 September 2025 00:35:05 +0000 (0:00:00.106) 0:00:03.372 ******
2025-09-08 00:35:08.453970 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:35:08.453983 | orchestrator |
2025-09-08 00:35:08.453996 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-08 00:35:08.454008 | orchestrator | Monday 08 September 2025 00:35:06 +0000 (0:00:00.673) 0:00:04.046 ******
2025-09-08 00:35:08.454078 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:35:08.454092 | orchestrator |
2025-09-08 00:35:08.454106 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-08 00:35:08.454117 | orchestrator |
2025-09-08 00:35:08.454127 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-08 00:35:08.454139 | orchestrator | Monday 08 September 2025 00:35:06 +0000 (0:00:00.113) 0:00:04.159 ******
2025-09-08 00:35:08.454150 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:35:08.454160 | orchestrator |
2025-09-08 00:35:08.454172 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-08 00:35:08.454183 | orchestrator | Monday 08 September 2025 00:35:06 +0000 (0:00:00.104) 0:00:04.264 ******
2025-09-08 00:35:08.454194 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:35:08.454205 | orchestrator |
2025-09-08 00:35:08.454216 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-08 00:35:08.454226 | orchestrator | Monday 08 September 2025 00:35:07 +0000 (0:00:00.689) 0:00:04.953 ******
2025-09-08 00:35:08.454256 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:35:08.454267 | orchestrator |
2025-09-08 00:35:08.454278 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-08 00:35:08.454289 | orchestrator |
2025-09-08 00:35:08.454300 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-08 00:35:08.454311 | orchestrator | Monday 08 September 2025 00:35:07 +0000 (0:00:00.120) 0:00:05.074 ******
2025-09-08 00:35:08.454322 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:35:08.454333 | orchestrator |
2025-09-08 00:35:08.454344 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-08 00:35:08.454355 | orchestrator | Monday 08 September 2025 00:35:07 +0000 (0:00:00.099) 0:00:05.173 ******
2025-09-08 00:35:08.454365 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:35:08.454376 | orchestrator |
2025-09-08 00:35:08.454387 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-08 00:35:08.454407 | orchestrator | Monday 08 September 2025 00:35:08 +0000 (0:00:00.635) 0:00:05.809 ******
2025-09-08 00:35:08.454449 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:35:08.454461 | orchestrator |
2025-09-08 00:35:08.454472 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:35:08.454484 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:35:08.454496 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:35:08.454507 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:35:08.454518 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:35:08.454528 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:35:08.454539 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:35:08.454550 | orchestrator |
2025-09-08 00:35:08.454561 | orchestrator |
2025-09-08 00:35:08.454572 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:35:08.454582 | orchestrator | Monday 08 September 2025 00:35:08 +0000 (0:00:00.043) 0:00:05.852 ******
2025-09-08 00:35:08.454593 | orchestrator | ===============================================================================
2025-09-08 00:35:08.454604 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.28s
2025-09-08 00:35:08.454619 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.72s
2025-09-08 00:35:08.454630 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.64s
2025-09-08 00:35:08.801631 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-09-08 00:35:20.794481 | orchestrator | 2025-09-08 00:35:20 | INFO  | Task d5f13899-6d03-4f43-99cf-92e07e307a1e (wait-for-connection) was prepared for execution.
2025-09-08 00:35:20.794608 | orchestrator | 2025-09-08 00:35:20 | INFO  | It takes a moment until task d5f13899-6d03-4f43-99cf-92e07e307a1e (wait-for-connection) has been started and output is visible here.
2025-09-08 00:35:37.101778 | orchestrator | 2025-09-08 00:35:37.101933 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-08 00:35:37.101961 | orchestrator | 2025-09-08 00:35:37.101979 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-08 00:35:37.101997 | orchestrator | Monday 08 September 2025 00:35:25 +0000 (0:00:00.242) 0:00:00.242 ****** 2025-09-08 00:35:37.102015 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:35:37.102105 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:35:37.102128 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:35:37.102150 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:35:37.102166 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:35:37.102222 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:35:37.102239 | orchestrator | 2025-09-08 00:35:37.102256 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:35:37.102274 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:35:37.102293 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:35:37.102309 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:35:37.102365 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:35:37.102383 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:35:37.102396 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:35:37.102409 | orchestrator | 2025-09-08 00:35:37.102422 | orchestrator | 2025-09-08 00:35:37.102436 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-08 00:35:37.102450 | orchestrator | Monday 08 September 2025 00:35:36 +0000 (0:00:11.622) 0:00:11.864 ****** 2025-09-08 00:35:37.102464 | orchestrator | =============================================================================== 2025-09-08 00:35:37.102477 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.62s 2025-09-08 00:35:37.403442 | orchestrator | + osism apply hddtemp 2025-09-08 00:35:49.364098 | orchestrator | 2025-09-08 00:35:49 | INFO  | Task 959e010c-d0ff-4221-892b-337016eca83f (hddtemp) was prepared for execution. 2025-09-08 00:35:49.364271 | orchestrator | 2025-09-08 00:35:49 | INFO  | It takes a moment until task 959e010c-d0ff-4221-892b-337016eca83f (hddtemp) has been started and output is visible here. 2025-09-08 00:36:16.971046 | orchestrator | 2025-09-08 00:36:16.971199 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-08 00:36:16.971217 | orchestrator | 2025-09-08 00:36:16.971246 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-08 00:36:16.971260 | orchestrator | Monday 08 September 2025 00:35:53 +0000 (0:00:00.261) 0:00:00.261 ****** 2025-09-08 00:36:16.971271 | orchestrator | ok: [testbed-manager] 2025-09-08 00:36:16.971284 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:36:16.971295 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:36:16.971306 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:36:16.971318 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:36:16.971329 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:36:16.971340 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:36:16.971351 | orchestrator | 2025-09-08 00:36:16.971363 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-08 00:36:16.971374 | orchestrator | Monday 08 September 2025 
00:35:54 +0000 (0:00:00.705) 0:00:00.966 ****** 2025-09-08 00:36:16.971386 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:36:16.971400 | orchestrator | 2025-09-08 00:36:16.971412 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-08 00:36:16.971423 | orchestrator | Monday 08 September 2025 00:35:55 +0000 (0:00:01.214) 0:00:02.180 ****** 2025-09-08 00:36:16.971434 | orchestrator | ok: [testbed-manager] 2025-09-08 00:36:16.971445 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:36:16.971456 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:36:16.971467 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:36:16.971478 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:36:16.971489 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:36:16.971500 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:36:16.971511 | orchestrator | 2025-09-08 00:36:16.971522 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-08 00:36:16.971533 | orchestrator | Monday 08 September 2025 00:35:57 +0000 (0:00:01.950) 0:00:04.131 ****** 2025-09-08 00:36:16.971545 | orchestrator | changed: [testbed-manager] 2025-09-08 00:36:16.971557 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:36:16.971570 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:36:16.971584 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:36:16.971597 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:36:16.971701 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:36:16.971715 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:36:16.971728 | orchestrator | 2025-09-08 00:36:16.971740 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-09-08 00:36:16.971754 | orchestrator | Monday 08 September 2025 00:35:58 +0000 (0:00:01.143) 0:00:05.275 ****** 2025-09-08 00:36:16.971767 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:36:16.971780 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:36:16.971793 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:36:16.971806 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:36:16.971819 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:36:16.971832 | orchestrator | ok: [testbed-manager] 2025-09-08 00:36:16.971844 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:36:16.971857 | orchestrator | 2025-09-08 00:36:16.971870 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-08 00:36:16.971883 | orchestrator | Monday 08 September 2025 00:35:59 +0000 (0:00:01.117) 0:00:06.392 ****** 2025-09-08 00:36:16.971895 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:36:16.971909 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:36:16.971922 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:36:16.971933 | orchestrator | changed: [testbed-manager] 2025-09-08 00:36:16.971944 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:36:16.971955 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:36:16.971966 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:36:16.971977 | orchestrator | 2025-09-08 00:36:16.971988 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-08 00:36:16.971999 | orchestrator | Monday 08 September 2025 00:36:00 +0000 (0:00:00.804) 0:00:07.197 ****** 2025-09-08 00:36:16.972010 | orchestrator | changed: [testbed-manager] 2025-09-08 00:36:16.972021 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:36:16.972032 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:36:16.972043 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:36:16.972054 | orchestrator | changed: 
[testbed-node-4] 2025-09-08 00:36:16.972065 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:36:16.972075 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:36:16.972086 | orchestrator | 2025-09-08 00:36:16.972097 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-08 00:36:16.972108 | orchestrator | Monday 08 September 2025 00:36:12 +0000 (0:00:12.070) 0:00:19.268 ****** 2025-09-08 00:36:16.972143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:36:16.972154 | orchestrator | 2025-09-08 00:36:16.972165 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-08 00:36:16.972176 | orchestrator | Monday 08 September 2025 00:36:13 +0000 (0:00:01.395) 0:00:20.664 ****** 2025-09-08 00:36:16.972187 | orchestrator | changed: [testbed-manager] 2025-09-08 00:36:16.972198 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:36:16.972209 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:36:16.972220 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:36:16.972231 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:36:16.972241 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:36:16.972252 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:36:16.972263 | orchestrator | 2025-09-08 00:36:16.972274 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:36:16.972285 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:36:16.972317 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:36:16.972335 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:36:16.972360 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:36:16.972372 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:36:16.972383 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:36:16.972393 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:36:16.972404 | orchestrator | 2025-09-08 00:36:16.972415 | orchestrator | 2025-09-08 00:36:16.972426 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:36:16.972437 | orchestrator | Monday 08 September 2025 00:36:16 +0000 (0:00:02.717) 0:00:23.381 ****** 2025-09-08 00:36:16.972448 | orchestrator | =============================================================================== 2025-09-08 00:36:16.972459 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.07s 2025-09-08 00:36:16.972470 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.72s 2025-09-08 00:36:16.972480 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.95s 2025-09-08 00:36:16.972491 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.40s 2025-09-08 00:36:16.972502 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.21s 2025-09-08 00:36:16.972513 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.14s 2025-09-08 00:36:16.972523 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.12s 2025-09-08 00:36:16.972534 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.80s 2025-09-08 00:36:16.972545 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.71s 2025-09-08 00:36:17.244019 | orchestrator | ++ semver latest 7.1.1 2025-09-08 00:36:17.295136 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-08 00:36:17.295178 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-08 00:36:17.295191 | orchestrator | + sudo systemctl restart manager.service 2025-09-08 00:36:56.216992 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-08 00:36:56.217177 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-08 00:36:56.217197 | orchestrator | + local max_attempts=60 2025-09-08 00:36:56.217210 | orchestrator | + local name=ceph-ansible 2025-09-08 00:36:56.217222 | orchestrator | + local attempt_num=1 2025-09-08 00:36:56.217233 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:36:56.247675 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:36:56.247730 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:36:56.247744 | orchestrator | + sleep 5 2025-09-08 00:37:01.250988 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:01.285792 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:01.285823 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:37:01.285836 | orchestrator | + sleep 5 2025-09-08 00:37:06.288598 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:06.313916 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:06.313954 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:37:06.313967 | orchestrator | + sleep 5 2025-09-08 00:37:11.318926 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:11.361513 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:11.361570 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:37:11.361584 | orchestrator | + sleep 5 2025-09-08 00:37:16.367392 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:16.410514 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:16.410807 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:37:16.410864 | orchestrator | + sleep 5 2025-09-08 00:37:21.415892 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:21.456280 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:21.456324 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:37:21.456337 | orchestrator | + sleep 5 2025-09-08 00:37:26.461916 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:26.506831 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:26.506897 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:37:26.506912 | orchestrator | + sleep 5 2025-09-08 00:37:31.512425 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:31.552383 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:31.552419 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:37:31.552432 | orchestrator | + sleep 5 2025-09-08 00:37:36.581474 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:36.627288 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:36.627334 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:37:36.627341 | orchestrator | + sleep 5 2025-09-08 00:37:41.631535 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:41.673815 | orchestrator | + [[ starting == 
\h\e\a\l\t\h\y ]] 2025-09-08 00:37:41.674314 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:37:41.674336 | orchestrator | + sleep 5 2025-09-08 00:37:46.678306 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:46.723465 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:46.723501 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:37:46.723513 | orchestrator | + sleep 5 2025-09-08 00:37:51.729277 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:51.771046 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:51.771136 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:37:51.771151 | orchestrator | + sleep 5 2025-09-08 00:37:56.777212 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:56.815622 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:56.816139 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:37:56.816160 | orchestrator | + sleep 5 2025-09-08 00:38:01.822245 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:38:01.862914 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:38:01.863070 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-08 00:38:01.863088 | orchestrator | + local max_attempts=60 2025-09-08 00:38:01.863204 | orchestrator | + local name=kolla-ansible 2025-09-08 00:38:01.863802 | orchestrator | + local attempt_num=1 2025-09-08 00:38:01.864444 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-08 00:38:01.913166 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:38:01.913197 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-08 00:38:01.913209 | orchestrator | + local max_attempts=60 2025-09-08 
00:38:01.913220 | orchestrator | + local name=osism-ansible 2025-09-08 00:38:01.913231 | orchestrator | + local attempt_num=1 2025-09-08 00:38:01.913415 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-08 00:38:01.946667 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:38:01.946694 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-08 00:38:01.946706 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-08 00:38:02.139823 | orchestrator | ARA in ceph-ansible already disabled. 2025-09-08 00:38:02.513423 | orchestrator | ARA in osism-ansible already disabled. 2025-09-08 00:38:02.697549 | orchestrator | ARA in osism-kubernetes already disabled. 2025-09-08 00:38:02.698516 | orchestrator | + osism apply gather-facts 2025-09-08 00:38:15.013078 | orchestrator | 2025-09-08 00:38:15 | INFO  | Task df823c33-b14d-4af5-a882-6fef5a73c33b (gather-facts) was prepared for execution. 2025-09-08 00:38:15.013205 | orchestrator | 2025-09-08 00:38:15 | INFO  | It takes a moment until task df823c33-b14d-4af5-a882-6fef5a73c33b (gather-facts) has been started and output is visible here. 
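The `wait_for_container_healthy` calls traced above (polling `docker inspect` every 5 seconds until the container reports `healthy`, giving up after `max_attempts`) can be reconstructed roughly as follows. This is a sketch inferred from the trace, not the actual script from the testbed repository; the `DOCKER` variable is an addition here for testability, while the job itself invokes `/usr/bin/docker` directly.

```shell
# Sketch of the health-wait helper, reconstructed from the bash -x trace.
# DOCKER is parameterized for testing; the traced job uses /usr/bin/docker.
DOCKER="${DOCKER:-/usr/bin/docker}"

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1

    # Poll the container's health status until Docker reports "healthy".
    until [[ "$($DOCKER inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        # Give up once the attempt counter reaches the limit.
        if (( attempt_num++ == max_attempts )); then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

With `max_attempts=60` and a 5-second sleep, this matches the roughly five-minute upper bound the job allows each of `ceph-ansible`, `kolla-ansible`, and `osism-ansible` to pass their health checks.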
2025-09-08 00:38:28.171509 | orchestrator | 2025-09-08 00:38:28.171640 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-08 00:38:28.171659 | orchestrator | 2025-09-08 00:38:28.171699 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-08 00:38:28.171712 | orchestrator | Monday 08 September 2025 00:38:19 +0000 (0:00:00.220) 0:00:00.220 ****** 2025-09-08 00:38:28.171723 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:38:28.171735 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:38:28.171746 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:38:28.171757 | orchestrator | ok: [testbed-manager] 2025-09-08 00:38:28.171768 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:38:28.171779 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:38:28.171789 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:38:28.171800 | orchestrator | 2025-09-08 00:38:28.171811 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-08 00:38:28.171822 | orchestrator | 2025-09-08 00:38:28.171833 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-08 00:38:28.171844 | orchestrator | Monday 08 September 2025 00:38:27 +0000 (0:00:08.171) 0:00:08.392 ****** 2025-09-08 00:38:28.171855 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:38:28.171866 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:38:28.171877 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:38:28.171888 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:38:28.171898 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:38:28.171909 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:38:28.171920 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:38:28.171979 | orchestrator | 2025-09-08 00:38:28.171990 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-08 00:38:28.172002 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:38:28.172015 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:38:28.172026 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:38:28.172039 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:38:28.172052 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:38:28.172065 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:38:28.172079 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:38:28.172091 | orchestrator | 2025-09-08 00:38:28.172104 | orchestrator | 2025-09-08 00:38:28.172116 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:38:28.172129 | orchestrator | Monday 08 September 2025 00:38:27 +0000 (0:00:00.503) 0:00:08.896 ****** 2025-09-08 00:38:28.172143 | orchestrator | =============================================================================== 2025-09-08 00:38:28.172156 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.17s 2025-09-08 00:38:28.172169 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2025-09-08 00:38:28.472608 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-09-08 00:38:28.491494 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-09-08 00:38:28.504170 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-09-08 00:38:28.526472 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-09-08 00:38:28.537724 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-09-08 00:38:28.549433 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-09-08 00:38:28.571597 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-09-08 00:38:28.585732 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-09-08 00:38:28.603740 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-09-08 00:38:28.619137 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-09-08 00:38:28.629639 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-09-08 00:38:28.643594 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-09-08 00:38:28.656440 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-09-08 00:38:28.668599 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-09-08 00:38:28.690774 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-09-08 00:38:28.717390 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-09-08 00:38:28.737709 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-09-08 00:38:28.749578 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-09-08 00:38:28.761128 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-09-08 00:38:28.774271 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-09-08 00:38:28.788210 | orchestrator | + [[ false == \t\r\u\e ]] 2025-09-08 00:38:29.285666 | orchestrator | ok: Runtime: 0:23:51.313299 2025-09-08 00:38:29.387017 | 2025-09-08 00:38:29.387142 | TASK [Deploy services] 2025-09-08 00:38:29.919051 | orchestrator | skipping: Conditional result was False 2025-09-08 00:38:29.937387 | 2025-09-08 00:38:29.937530 | TASK [Deploy in a nutshell] 2025-09-08 00:38:30.623473 | orchestrator | + set -e 2025-09-08 00:38:30.623648 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-08 00:38:30.623672 | orchestrator | ++ export INTERACTIVE=false 2025-09-08 00:38:30.623693 | orchestrator | ++ INTERACTIVE=false 2025-09-08 00:38:30.623706 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-08 00:38:30.623719 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-08 00:38:30.623732 | orchestrator | + source /opt/manager-vars.sh 2025-09-08 00:38:30.623776 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-08 00:38:30.623805 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-08 00:38:30.623819 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-08 00:38:30.623835 | orchestrator | ++ CEPH_VERSION=reef 2025-09-08 00:38:30.623847 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-08 00:38:30.623866 | orchestrator | ++ 
CONFIGURATION_VERSION=main 2025-09-08 00:38:30.623877 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-08 00:38:30.623897 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-08 00:38:30.623908 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-08 00:38:30.623952 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-08 00:38:30.623966 | orchestrator | ++ export ARA=false 2025-09-08 00:38:30.623978 | orchestrator | ++ ARA=false 2025-09-08 00:38:30.623989 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-08 00:38:30.624001 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-08 00:38:30.624011 | orchestrator | ++ export TEMPEST=true 2025-09-08 00:38:30.624022 | orchestrator | ++ TEMPEST=true 2025-09-08 00:38:30.624033 | orchestrator | ++ export IS_ZUUL=true 2025-09-08 00:38:30.624044 | orchestrator | ++ IS_ZUUL=true 2025-09-08 00:38:30.624055 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.100 2025-09-08 00:38:30.624066 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.100 2025-09-08 00:38:30.624089 | orchestrator | ++ export EXTERNAL_API=false 2025-09-08 00:38:30.624100 | orchestrator | ++ EXTERNAL_API=false 2025-09-08 00:38:30.624111 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-08 00:38:30.624122 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-08 00:38:30.624133 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-08 00:38:30.624143 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-08 00:38:30.624155 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-08 00:38:30.624166 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-08 00:38:30.624177 | orchestrator | 2025-09-08 00:38:30.624188 | orchestrator | # PULL IMAGES 2025-09-08 00:38:30.624199 | orchestrator | 2025-09-08 00:38:30.624210 | orchestrator | + echo 2025-09-08 00:38:30.624221 | orchestrator | + echo '# PULL IMAGES' 2025-09-08 00:38:30.624232 | orchestrator | + echo 2025-09-08 00:38:30.625135 | orchestrator | ++ semver latest 7.0.0 2025-09-08 
00:38:30.690345 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-08 00:38:30.690376 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-08 00:38:30.690397 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-09-08 00:38:32.580207 | orchestrator | 2025-09-08 00:38:32 | INFO  | Trying to run play pull-images in environment custom 2025-09-08 00:38:42.693446 | orchestrator | 2025-09-08 00:38:42 | INFO  | Task c8b52219-d90a-4a4f-86fa-0d309fc7e7bd (pull-images) was prepared for execution. 2025-09-08 00:38:42.693578 | orchestrator | 2025-09-08 00:38:42 | INFO  | Task c8b52219-d90a-4a4f-86fa-0d309fc7e7bd is running in background. No more output. Check ARA for logs. 2025-09-08 00:38:44.961503 | orchestrator | 2025-09-08 00:38:44 | INFO  | Trying to run play wipe-partitions in environment custom 2025-09-08 00:38:55.060132 | orchestrator | 2025-09-08 00:38:55 | INFO  | Task 24fc5ba6-c9d1-4f97-9a47-fca998320473 (wipe-partitions) was prepared for execution. 2025-09-08 00:38:55.060299 | orchestrator | 2025-09-08 00:38:55 | INFO  | It takes a moment until task 24fc5ba6-c9d1-4f97-9a47-fca998320473 (wipe-partitions) has been started and output is visible here. 
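The `semver latest 7.0.0` check followed by `[[ latest == \l\a\t\e\s\t ]]` in the trace above is a version gate: the `pull-images` play is run when `MANAGER_VERSION` is either `latest` or a release of at least 7.0.0. A minimal sketch of that gate is below, assuming the `semver` helper prints a three-way comparison result (-1/0/1) like the trace suggests; the real helper in the testbed scripts may differ.

```shell
# Hedged sketch of the MANAGER_VERSION gate seen in the trace.
# Assumption: `semver A B` prints -1, 0, or 1 as a three-way comparator.
version_gate() {
    local version=$1
    # "latest" always takes the new code path.
    if [[ "$version" == "latest" ]]; then
        echo "new"
        return
    fi
    # Numeric releases qualify from 7.0.0 upward.
    if [[ "$(semver "$version" 7.0.0)" -ge 0 ]]; then
        echo "new"
    else
        echo "old"
    fi
}
```

In the log, `semver latest 7.0.0` returns -1 (so the numeric branch fails), and the explicit `latest` comparison then triggers `osism apply --no-wait -r 2 -e custom pull-images`.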
2025-09-08 00:39:07.301592 | orchestrator | 2025-09-08 00:39:07.301739 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-09-08 00:39:07.301761 | orchestrator | 2025-09-08 00:39:07.301773 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-09-08 00:39:07.301792 | orchestrator | Monday 08 September 2025 00:38:59 +0000 (0:00:00.150) 0:00:00.150 ****** 2025-09-08 00:39:07.301807 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:39:07.301819 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:39:07.301832 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:39:07.301852 | orchestrator | 2025-09-08 00:39:07.301871 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-09-08 00:39:07.301992 | orchestrator | Monday 08 September 2025 00:38:59 +0000 (0:00:00.621) 0:00:00.771 ****** 2025-09-08 00:39:07.302005 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:39:07.302069 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:07.302087 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:39:07.302101 | orchestrator | 2025-09-08 00:39:07.302114 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-09-08 00:39:07.302127 | orchestrator | Monday 08 September 2025 00:39:00 +0000 (0:00:00.236) 0:00:01.008 ****** 2025-09-08 00:39:07.302140 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:39:07.302154 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:39:07.302167 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:39:07.302179 | orchestrator | 2025-09-08 00:39:07.302191 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-09-08 00:39:07.302206 | orchestrator | Monday 08 September 2025 00:39:00 +0000 (0:00:00.733) 0:00:01.742 ****** 2025-09-08 00:39:07.302218 | orchestrator | skipping: 
[testbed-node-3] 2025-09-08 00:39:07.302231 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:07.302244 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:39:07.302256 | orchestrator | 2025-09-08 00:39:07.302268 | orchestrator | TASK [Check device availability] *********************************************** 2025-09-08 00:39:07.302281 | orchestrator | Monday 08 September 2025 00:39:01 +0000 (0:00:00.276) 0:00:02.019 ****** 2025-09-08 00:39:07.302295 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-08 00:39:07.302313 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-08 00:39:07.302326 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-08 00:39:07.302339 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-08 00:39:07.302352 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-08 00:39:07.302364 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-08 00:39:07.302377 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-08 00:39:07.302389 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-08 00:39:07.302402 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-08 00:39:07.302415 | orchestrator | 2025-09-08 00:39:07.302429 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-09-08 00:39:07.302443 | orchestrator | Monday 08 September 2025 00:39:02 +0000 (0:00:01.147) 0:00:03.166 ****** 2025-09-08 00:39:07.302453 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-09-08 00:39:07.302464 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-09-08 00:39:07.302475 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-09-08 00:39:07.302486 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-09-08 00:39:07.302496 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-09-08 00:39:07.302507 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-09-08 00:39:07.302517 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-09-08 00:39:07.302528 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-09-08 00:39:07.302538 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-09-08 00:39:07.302549 | orchestrator | 2025-09-08 00:39:07.302559 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-09-08 00:39:07.302570 | orchestrator | Monday 08 September 2025 00:39:03 +0000 (0:00:01.334) 0:00:04.501 ****** 2025-09-08 00:39:07.302580 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-08 00:39:07.302591 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-08 00:39:07.302601 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-08 00:39:07.302612 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-08 00:39:07.302623 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-08 00:39:07.302640 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-08 00:39:07.302651 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-08 00:39:07.302670 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-08 00:39:07.302681 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-08 00:39:07.302692 | orchestrator | 2025-09-08 00:39:07.302702 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-09-08 00:39:07.302713 | orchestrator | Monday 08 September 2025 00:39:05 +0000 (0:00:02.190) 0:00:06.691 ****** 2025-09-08 00:39:07.302724 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:39:07.302734 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:39:07.302745 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:39:07.302755 | orchestrator | 2025-09-08 00:39:07.302766 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-09-08 00:39:07.302777 | orchestrator | Monday 08 September 2025 00:39:06 +0000 (0:00:00.611) 0:00:07.303 ****** 2025-09-08 00:39:07.302787 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:39:07.302798 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:39:07.302808 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:39:07.302819 | orchestrator | 2025-09-08 00:39:07.302830 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:39:07.302842 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:39:07.302855 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:39:07.302905 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:39:07.302918 | orchestrator | 2025-09-08 00:39:07.302929 | orchestrator | 2025-09-08 00:39:07.302940 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:39:07.302951 | orchestrator | Monday 08 September 2025 00:39:06 +0000 (0:00:00.628) 0:00:07.932 ****** 2025-09-08 00:39:07.302961 | orchestrator | =============================================================================== 2025-09-08 00:39:07.302972 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.19s 2025-09-08 00:39:07.302983 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.33s 2025-09-08 00:39:07.302993 | orchestrator | Check device availability ----------------------------------------------- 1.15s 2025-09-08 00:39:07.303004 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.73s 2025-09-08 00:39:07.303015 | orchestrator | Request device events from the kernel 
----------------------------------- 0.63s 2025-09-08 00:39:07.303025 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.62s 2025-09-08 00:39:07.303036 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s 2025-09-08 00:39:07.303047 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s 2025-09-08 00:39:07.303058 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s 2025-09-08 00:39:19.557537 | orchestrator | 2025-09-08 00:39:19 | INFO  | Task af937665-8658-40a0-b8d4-27d249e00404 (facts) was prepared for execution. 2025-09-08 00:39:19.557648 | orchestrator | 2025-09-08 00:39:19 | INFO  | It takes a moment until task af937665-8658-40a0-b8d4-27d249e00404 (facts) has been started and output is visible here. 2025-09-08 00:39:31.350184 | orchestrator | 2025-09-08 00:39:31.350314 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-08 00:39:31.350332 | orchestrator | 2025-09-08 00:39:31.350344 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-08 00:39:31.350356 | orchestrator | Monday 08 September 2025 00:39:23 +0000 (0:00:00.274) 0:00:00.274 ****** 2025-09-08 00:39:31.350367 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:39:31.350380 | orchestrator | ok: [testbed-manager] 2025-09-08 00:39:31.350390 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:39:31.350431 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:39:31.350442 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:39:31.350453 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:39:31.350464 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:39:31.350474 | orchestrator | 2025-09-08 00:39:31.350488 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-08 00:39:31.350499 | 
orchestrator | Monday 08 September 2025 00:39:24 +0000 (0:00:01.042) 0:00:01.316 ****** 2025-09-08 00:39:31.350510 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:39:31.350521 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:39:31.350532 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:39:31.350543 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:39:31.350554 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:39:31.350564 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:31.350575 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:39:31.350586 | orchestrator | 2025-09-08 00:39:31.350597 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-08 00:39:31.350607 | orchestrator | 2025-09-08 00:39:31.350618 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-08 00:39:31.350629 | orchestrator | Monday 08 September 2025 00:39:25 +0000 (0:00:01.091) 0:00:02.408 ****** 2025-09-08 00:39:31.350640 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:39:31.350650 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:39:31.350662 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:39:31.350676 | orchestrator | ok: [testbed-manager] 2025-09-08 00:39:31.350689 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:39:31.350701 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:39:31.350713 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:39:31.350725 | orchestrator | 2025-09-08 00:39:31.350739 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-08 00:39:31.350751 | orchestrator | 2025-09-08 00:39:31.350764 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-08 00:39:31.350796 | orchestrator | Monday 08 September 2025 00:39:30 +0000 (0:00:04.467) 0:00:06.875 ****** 2025-09-08 00:39:31.350810 | orchestrator | 
skipping: [testbed-manager] 2025-09-08 00:39:31.350823 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:39:31.350835 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:39:31.350876 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:39:31.350889 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:39:31.350902 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:31.350914 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:39:31.350926 | orchestrator | 2025-09-08 00:39:31.350938 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:39:31.350951 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:39:31.350966 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:39:31.350978 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:39:31.350992 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:39:31.351005 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:39:31.351018 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:39:31.351029 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:39:31.351040 | orchestrator | 2025-09-08 00:39:31.351058 | orchestrator | 2025-09-08 00:39:31.351070 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:39:31.351080 | orchestrator | Monday 08 September 2025 00:39:30 +0000 (0:00:00.743) 0:00:07.619 ****** 2025-09-08 00:39:31.351091 | orchestrator | =============================================================================== 
2025-09-08 00:39:31.351102 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.47s 2025-09-08 00:39:31.351113 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.09s 2025-09-08 00:39:31.351123 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.04s 2025-09-08 00:39:31.351134 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.74s 2025-09-08 00:39:33.689057 | orchestrator | 2025-09-08 00:39:33 | INFO  | Task 87e68b0a-d938-4147-a105-c948a934cec6 (ceph-configure-lvm-volumes) was prepared for execution. 2025-09-08 00:39:33.689163 | orchestrator | 2025-09-08 00:39:33 | INFO  | It takes a moment until task 87e68b0a-d938-4147-a105-c948a934cec6 (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-09-08 00:39:45.482745 | orchestrator | 2025-09-08 00:39:45.482889 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-08 00:39:45.482903 | orchestrator | 2025-09-08 00:39:45.482911 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-08 00:39:45.482922 | orchestrator | Monday 08 September 2025 00:39:37 +0000 (0:00:00.328) 0:00:00.328 ****** 2025-09-08 00:39:45.482930 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-08 00:39:45.482938 | orchestrator | 2025-09-08 00:39:45.482946 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-08 00:39:45.482953 | orchestrator | Monday 08 September 2025 00:39:38 +0000 (0:00:00.259) 0:00:00.587 ****** 2025-09-08 00:39:45.482960 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:39:45.482968 | orchestrator | 2025-09-08 00:39:45.482976 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:39:45.482983 | orchestrator | 
Monday 08 September 2025 00:39:38 +0000 (0:00:00.236) 0:00:00.824 ****** 2025-09-08 00:39:45.482990 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-08 00:39:45.482998 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-08 00:39:45.483006 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-08 00:39:45.483013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-08 00:39:45.483020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-08 00:39:45.483027 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-08 00:39:45.483034 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-08 00:39:45.483041 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-08 00:39:45.483048 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-08 00:39:45.483056 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-08 00:39:45.483063 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-08 00:39:45.483077 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-08 00:39:45.483084 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-08 00:39:45.483091 | orchestrator | 2025-09-08 00:39:45.483099 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:39:45.483106 | orchestrator | Monday 08 September 2025 00:39:38 +0000 (0:00:00.369) 0:00:01.193 ****** 2025-09-08 
00:39:45.483113 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:39:45.483137 | orchestrator | 2025-09-08 00:39:45.483145 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:39:45.483152 | orchestrator | Monday 08 September 2025 00:39:39 +0000 (0:00:00.473) 0:00:01.667 ****** 2025-09-08 00:39:45.483159 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:39:45.483166 | orchestrator | 2025-09-08 00:39:45.483173 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:39:45.483180 | orchestrator | Monday 08 September 2025 00:39:39 +0000 (0:00:00.204) 0:00:01.871 ****** 2025-09-08 00:39:45.483187 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:39:45.483194 | orchestrator | 2025-09-08 00:39:45.483201 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:39:45.483208 | orchestrator | Monday 08 September 2025 00:39:39 +0000 (0:00:00.193) 0:00:02.064 ****** 2025-09-08 00:39:45.483215 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:39:45.483226 | orchestrator | 2025-09-08 00:39:45.483234 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:39:45.483241 | orchestrator | Monday 08 September 2025 00:39:39 +0000 (0:00:00.188) 0:00:02.253 ****** 2025-09-08 00:39:45.483248 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:39:45.483255 | orchestrator | 2025-09-08 00:39:45.483263 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:39:45.483270 | orchestrator | Monday 08 September 2025 00:39:40 +0000 (0:00:00.198) 0:00:02.452 ****** 2025-09-08 00:39:45.483277 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:39:45.483284 | orchestrator | 2025-09-08 00:39:45.483291 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-09-08 00:39:45.483298 | orchestrator | Monday 08 September 2025 00:39:40 +0000 (0:00:00.194) 0:00:02.646 ****** 2025-09-08 00:39:45.483305 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:39:45.483312 | orchestrator | 2025-09-08 00:39:45.483319 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:39:45.483326 | orchestrator | Monday 08 September 2025 00:39:40 +0000 (0:00:00.214) 0:00:02.861 ****** 2025-09-08 00:39:45.483334 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:39:45.483341 | orchestrator | 2025-09-08 00:39:45.483348 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:39:45.483355 | orchestrator | Monday 08 September 2025 00:39:40 +0000 (0:00:00.204) 0:00:03.066 ****** 2025-09-08 00:39:45.483362 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691) 2025-09-08 00:39:45.483370 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691) 2025-09-08 00:39:45.483377 | orchestrator | 2025-09-08 00:39:45.483384 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:39:45.483391 | orchestrator | Monday 08 September 2025 00:39:41 +0000 (0:00:00.419) 0:00:03.485 ****** 2025-09-08 00:39:45.483413 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_db00b734-b58e-4932-8acd-6a266572e733) 2025-09-08 00:39:45.483420 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_db00b734-b58e-4932-8acd-6a266572e733) 2025-09-08 00:39:45.483428 | orchestrator | 2025-09-08 00:39:45.483435 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:39:45.483442 | orchestrator | Monday 08 September 2025 00:39:41 +0000 (0:00:00.398) 0:00:03.884 ****** 2025-09-08 00:39:45.483449 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8d0cadb8-6915-4fd2-b4e0-4946f7f23ce1) 2025-09-08 00:39:45.483456 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8d0cadb8-6915-4fd2-b4e0-4946f7f23ce1) 2025-09-08 00:39:45.483463 | orchestrator | 2025-09-08 00:39:45.483470 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:39:45.483477 | orchestrator | Monday 08 September 2025 00:39:42 +0000 (0:00:00.650) 0:00:04.535 ****** 2025-09-08 00:39:45.483484 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1f7dc1ee-c7b6-4bcc-8d38-7d9cabc41a41) 2025-09-08 00:39:45.483497 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1f7dc1ee-c7b6-4bcc-8d38-7d9cabc41a41) 2025-09-08 00:39:45.483504 | orchestrator | 2025-09-08 00:39:45.483511 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:39:45.483518 | orchestrator | Monday 08 September 2025 00:39:42 +0000 (0:00:00.618) 0:00:05.153 ****** 2025-09-08 00:39:45.483526 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-08 00:39:45.483533 | orchestrator | 2025-09-08 00:39:45.483540 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:39:45.483551 | orchestrator | Monday 08 September 2025 00:39:43 +0000 (0:00:00.735) 0:00:05.888 ****** 2025-09-08 00:39:45.483558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-08 00:39:45.483565 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-08 00:39:45.483572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-08 00:39:45.483579 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => 
(item=loop3) 2025-09-08 00:39:45.483586 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-08 00:39:45.483594 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-08 00:39:45.483601 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-08 00:39:45.483608 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-08 00:39:45.483615 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-08 00:39:45.483622 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-08 00:39:45.483629 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-08 00:39:45.483636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-08 00:39:45.483643 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-08 00:39:45.483650 | orchestrator | 2025-09-08 00:39:45.483658 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:39:45.483665 | orchestrator | Monday 08 September 2025 00:39:43 +0000 (0:00:00.387) 0:00:06.276 ****** 2025-09-08 00:39:45.483672 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:39:45.483679 | orchestrator | 2025-09-08 00:39:45.483686 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:39:45.483693 | orchestrator | Monday 08 September 2025 00:39:44 +0000 (0:00:00.211) 0:00:06.488 ****** 2025-09-08 00:39:45.483700 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:39:45.483708 | orchestrator | 2025-09-08 00:39:45.483715 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-09-08 00:39:45.483722 | orchestrator | Monday 08 September 2025 00:39:44 +0000 (0:00:00.197) 0:00:06.685 ****** 2025-09-08 00:39:45.483729 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:39:45.483736 | orchestrator | 2025-09-08 00:39:45.483743 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:39:45.483750 | orchestrator | Monday 08 September 2025 00:39:44 +0000 (0:00:00.203) 0:00:06.889 ****** 2025-09-08 00:39:45.483757 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:39:45.483764 | orchestrator | 2025-09-08 00:39:45.483772 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:39:45.483779 | orchestrator | Monday 08 September 2025 00:39:44 +0000 (0:00:00.211) 0:00:07.101 ****** 2025-09-08 00:39:45.483786 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:39:45.483793 | orchestrator | 2025-09-08 00:39:45.483805 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:39:45.483812 | orchestrator | Monday 08 September 2025 00:39:44 +0000 (0:00:00.194) 0:00:07.295 ****** 2025-09-08 00:39:45.483819 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:39:45.483826 | orchestrator | 2025-09-08 00:39:45.483848 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:39:45.483855 | orchestrator | Monday 08 September 2025 00:39:45 +0000 (0:00:00.202) 0:00:07.498 ****** 2025-09-08 00:39:45.483862 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:39:45.483870 | orchestrator | 2025-09-08 00:39:45.483877 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:39:45.483884 | orchestrator | Monday 08 September 2025 00:39:45 +0000 (0:00:00.196) 0:00:07.694 ****** 2025-09-08 00:39:45.483896 | 
orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:53.286424 | orchestrator |
2025-09-08 00:39:53.286532 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:39:53.286550 | orchestrator | Monday 08 September 2025 00:39:45 +0000 (0:00:00.186) 0:00:07.881 ******
2025-09-08 00:39:53.286564 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-08 00:39:53.286576 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-08 00:39:53.286587 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-08 00:39:53.286598 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-08 00:39:53.286609 | orchestrator |
2025-09-08 00:39:53.286620 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:39:53.286630 | orchestrator | Monday 08 September 2025 00:39:46 +0000 (0:00:01.117) 0:00:08.998 ******
2025-09-08 00:39:53.286641 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:53.286652 | orchestrator |
2025-09-08 00:39:53.286663 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:39:53.286673 | orchestrator | Monday 08 September 2025 00:39:46 +0000 (0:00:00.207) 0:00:09.206 ******
2025-09-08 00:39:53.286684 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:53.286695 | orchestrator |
2025-09-08 00:39:53.286705 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:39:53.286716 | orchestrator | Monday 08 September 2025 00:39:46 +0000 (0:00:00.200) 0:00:09.406 ******
2025-09-08 00:39:53.286727 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:53.286738 | orchestrator |
2025-09-08 00:39:53.286748 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:39:53.286759 | orchestrator | Monday 08 September 2025 00:39:47 +0000 (0:00:00.229) 0:00:09.636 ******
2025-09-08 00:39:53.286769 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:53.286780 | orchestrator |
2025-09-08 00:39:53.286791 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-08 00:39:53.286802 | orchestrator | Monday 08 September 2025 00:39:47 +0000 (0:00:00.193) 0:00:09.830 ******
2025-09-08 00:39:53.286812 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-09-08 00:39:53.286856 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-09-08 00:39:53.286869 | orchestrator |
2025-09-08 00:39:53.286879 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-08 00:39:53.286890 | orchestrator | Monday 08 September 2025 00:39:47 +0000 (0:00:00.175) 0:00:10.005 ******
2025-09-08 00:39:53.286914 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:53.286926 | orchestrator |
2025-09-08 00:39:53.286936 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-08 00:39:53.286947 | orchestrator | Monday 08 September 2025 00:39:47 +0000 (0:00:00.138) 0:00:10.144 ******
2025-09-08 00:39:53.286958 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:53.286971 | orchestrator |
2025-09-08 00:39:53.286983 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-08 00:39:53.286996 | orchestrator | Monday 08 September 2025 00:39:47 +0000 (0:00:00.140) 0:00:10.284 ******
2025-09-08 00:39:53.287009 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:53.287045 | orchestrator |
2025-09-08 00:39:53.287059 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-08 00:39:53.287073 | orchestrator | Monday 08 September 2025 00:39:48 +0000 (0:00:00.144) 0:00:10.429 ******
2025-09-08 00:39:53.287083 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:39:53.287094 | orchestrator |
2025-09-08 00:39:53.287105 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-08 00:39:53.287115 | orchestrator | Monday 08 September 2025 00:39:48 +0000 (0:00:00.144) 0:00:10.573 ******
2025-09-08 00:39:53.287126 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6b18b724-0587-5812-9148-41071cea985b'}})
2025-09-08 00:39:53.287137 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9b42feaf-b3bc-5f68-b3eb-37674b93132b'}})
2025-09-08 00:39:53.287148 | orchestrator |
2025-09-08 00:39:53.287158 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-09-08 00:39:53.287169 | orchestrator | Monday 08 September 2025 00:39:48 +0000 (0:00:00.178) 0:00:10.752 ******
2025-09-08 00:39:53.287180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6b18b724-0587-5812-9148-41071cea985b'}})
2025-09-08 00:39:53.287199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9b42feaf-b3bc-5f68-b3eb-37674b93132b'}})
2025-09-08 00:39:53.287210 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:53.287220 | orchestrator |
2025-09-08 00:39:53.287231 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-09-08 00:39:53.287242 | orchestrator | Monday 08 September 2025 00:39:48 +0000 (0:00:00.157) 0:00:10.909 ******
2025-09-08 00:39:53.287252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6b18b724-0587-5812-9148-41071cea985b'}})
2025-09-08 00:39:53.287263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9b42feaf-b3bc-5f68-b3eb-37674b93132b'}})
2025-09-08 00:39:53.287273 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:53.287284 | orchestrator |
2025-09-08 00:39:53.287294 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-09-08 00:39:53.287305 | orchestrator | Monday 08 September 2025 00:39:48 +0000 (0:00:00.344) 0:00:11.254 ******
2025-09-08 00:39:53.287315 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6b18b724-0587-5812-9148-41071cea985b'}})
2025-09-08 00:39:53.287326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9b42feaf-b3bc-5f68-b3eb-37674b93132b'}})
2025-09-08 00:39:53.287337 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:53.287347 | orchestrator |
2025-09-08 00:39:53.287375 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-09-08 00:39:53.287387 | orchestrator | Monday 08 September 2025 00:39:48 +0000 (0:00:00.148) 0:00:11.403 ******
2025-09-08 00:39:53.287397 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:39:53.287408 | orchestrator |
2025-09-08 00:39:53.287424 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-09-08 00:39:53.287435 | orchestrator | Monday 08 September 2025 00:39:49 +0000 (0:00:00.155) 0:00:11.558 ******
2025-09-08 00:39:53.287446 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:39:53.287457 | orchestrator |
2025-09-08 00:39:53.287468 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-09-08 00:39:53.287479 | orchestrator | Monday 08 September 2025 00:39:49 +0000 (0:00:00.159) 0:00:11.717 ******
2025-09-08 00:39:53.287489 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:53.287500 | orchestrator |
2025-09-08 00:39:53.287510 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-09-08 00:39:53.287521 | orchestrator | Monday 08 September 2025 00:39:49 +0000 (0:00:00.141) 0:00:11.858 ******
2025-09-08 00:39:53.287532 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:53.287543 | orchestrator |
2025-09-08 00:39:53.287561 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-08 00:39:53.287572 | orchestrator | Monday 08 September 2025 00:39:49 +0000 (0:00:00.133) 0:00:11.992 ******
2025-09-08 00:39:53.287583 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:53.287594 | orchestrator |
2025-09-08 00:39:53.287604 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-08 00:39:53.287615 | orchestrator | Monday 08 September 2025 00:39:49 +0000 (0:00:00.139) 0:00:12.131 ******
2025-09-08 00:39:53.287626 | orchestrator | ok: [testbed-node-3] => {
2025-09-08 00:39:53.287637 | orchestrator |     "ceph_osd_devices": {
2025-09-08 00:39:53.287648 | orchestrator |         "sdb": {
2025-09-08 00:39:53.287660 | orchestrator |             "osd_lvm_uuid": "6b18b724-0587-5812-9148-41071cea985b"
2025-09-08 00:39:53.287671 | orchestrator |         },
2025-09-08 00:39:53.287682 | orchestrator |         "sdc": {
2025-09-08 00:39:53.287692 | orchestrator |             "osd_lvm_uuid": "9b42feaf-b3bc-5f68-b3eb-37674b93132b"
2025-09-08 00:39:53.287703 | orchestrator |         }
2025-09-08 00:39:53.287713 | orchestrator |     }
2025-09-08 00:39:53.287724 | orchestrator | }
2025-09-08 00:39:53.287735 | orchestrator |
2025-09-08 00:39:53.287746 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-08 00:39:53.287756 | orchestrator | Monday 08 September 2025 00:39:49 +0000 (0:00:00.145) 0:00:12.277 ******
2025-09-08 00:39:53.287767 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:53.287777 | orchestrator |
2025-09-08 00:39:53.287788 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-08 00:39:53.287799 | orchestrator | Monday 08 September 2025 00:39:50 +0000 (0:00:00.133) 0:00:12.410 ******
2025-09-08 00:39:53.287809 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:53.287820 | orchestrator |
2025-09-08 00:39:53.287846 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-08 00:39:53.287857 | orchestrator | Monday 08 September 2025 00:39:50 +0000 (0:00:00.140) 0:00:12.551 ******
2025-09-08 00:39:53.287868 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:53.287878 | orchestrator |
2025-09-08 00:39:53.287889 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-08 00:39:53.287899 | orchestrator | Monday 08 September 2025 00:39:50 +0000 (0:00:00.137) 0:00:12.688 ******
2025-09-08 00:39:53.287910 | orchestrator | changed: [testbed-node-3] => {
2025-09-08 00:39:53.287921 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-08 00:39:53.287932 | orchestrator |         "ceph_osd_devices": {
2025-09-08 00:39:53.287942 | orchestrator |             "sdb": {
2025-09-08 00:39:53.287953 | orchestrator |                 "osd_lvm_uuid": "6b18b724-0587-5812-9148-41071cea985b"
2025-09-08 00:39:53.287964 | orchestrator |             },
2025-09-08 00:39:53.287975 | orchestrator |             "sdc": {
2025-09-08 00:39:53.287985 | orchestrator |                 "osd_lvm_uuid": "9b42feaf-b3bc-5f68-b3eb-37674b93132b"
2025-09-08 00:39:53.287996 | orchestrator |             }
2025-09-08 00:39:53.288007 | orchestrator |         },
2025-09-08 00:39:53.288017 | orchestrator |         "lvm_volumes": [
2025-09-08 00:39:53.288028 | orchestrator |             {
2025-09-08 00:39:53.288039 | orchestrator |                 "data": "osd-block-6b18b724-0587-5812-9148-41071cea985b",
2025-09-08 00:39:53.288050 | orchestrator |                 "data_vg": "ceph-6b18b724-0587-5812-9148-41071cea985b"
2025-09-08 00:39:53.288060 | orchestrator |             },
2025-09-08 00:39:53.288071 | orchestrator |             {
2025-09-08 00:39:53.288082 | orchestrator |                 "data": "osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b",
2025-09-08 00:39:53.288093 | orchestrator |                 "data_vg": "ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b"
2025-09-08 00:39:53.288103 | orchestrator |             }
2025-09-08 00:39:53.288114 | orchestrator |         ]
2025-09-08 00:39:53.288124 | orchestrator |     }
2025-09-08 00:39:53.288135 | orchestrator | }
2025-09-08 00:39:53.288146 | orchestrator |
2025-09-08 00:39:53.288162 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-08 00:39:53.288180 | orchestrator | Monday 08 September 2025 00:39:50 +0000 (0:00:00.213) 0:00:12.901 ******
2025-09-08 00:39:53.288190 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-08 00:39:53.288201 | orchestrator |
2025-09-08 00:39:53.288212 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-08 00:39:53.288222 | orchestrator |
2025-09-08 00:39:53.288233 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-08 00:39:53.288244 | orchestrator | Monday 08 September 2025 00:39:52 +0000 (0:00:02.275) 0:00:15.177 ******
2025-09-08 00:39:53.288254 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-08 00:39:53.288265 | orchestrator |
2025-09-08 00:39:53.288276 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-08 00:39:53.288286 | orchestrator | Monday 08 September 2025 00:39:53 +0000 (0:00:00.266) 0:00:15.444 ******
2025-09-08 00:39:53.288297 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:39:53.288308 | orchestrator |
2025-09-08 00:39:53.288318 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:53.288336 | orchestrator | Monday 08 September 2025 00:39:53 +0000 (0:00:00.241) 0:00:15.685 ******
2025-09-08 00:40:01.240214 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-09-08 00:40:01.240351 | orchestrator | included:
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-08 00:40:01.240367 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-08 00:40:01.240379 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-08 00:40:01.240406 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-08 00:40:01.240459 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-08 00:40:01.240473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-08 00:40:01.240486 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-08 00:40:01.240498 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-08 00:40:01.240510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-08 00:40:01.240521 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-08 00:40:01.240532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-08 00:40:01.240544 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-08 00:40:01.240561 | orchestrator | 2025-09-08 00:40:01.240574 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:01.240587 | orchestrator | Monday 08 September 2025 00:39:53 +0000 (0:00:00.365) 0:00:16.051 ****** 2025-09-08 00:40:01.240598 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:40:01.240611 | orchestrator | 2025-09-08 00:40:01.240622 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 
00:40:01.240633 | orchestrator | Monday 08 September 2025 00:39:53 +0000 (0:00:00.184) 0:00:16.236 ****** 2025-09-08 00:40:01.240644 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:40:01.240655 | orchestrator | 2025-09-08 00:40:01.240666 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:01.240677 | orchestrator | Monday 08 September 2025 00:39:54 +0000 (0:00:00.191) 0:00:16.427 ****** 2025-09-08 00:40:01.240688 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:40:01.240699 | orchestrator | 2025-09-08 00:40:01.240710 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:01.240721 | orchestrator | Monday 08 September 2025 00:39:54 +0000 (0:00:00.201) 0:00:16.629 ****** 2025-09-08 00:40:01.240732 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:40:01.240769 | orchestrator | 2025-09-08 00:40:01.240781 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:01.240792 | orchestrator | Monday 08 September 2025 00:39:54 +0000 (0:00:00.211) 0:00:16.840 ****** 2025-09-08 00:40:01.240802 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:40:01.240842 | orchestrator | 2025-09-08 00:40:01.240854 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:01.240865 | orchestrator | Monday 08 September 2025 00:39:55 +0000 (0:00:00.623) 0:00:17.464 ****** 2025-09-08 00:40:01.240876 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:40:01.240887 | orchestrator | 2025-09-08 00:40:01.240898 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:01.240909 | orchestrator | Monday 08 September 2025 00:39:55 +0000 (0:00:00.187) 0:00:17.651 ****** 2025-09-08 00:40:01.240939 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:40:01.240950 | 
orchestrator | 2025-09-08 00:40:01.240961 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:01.240972 | orchestrator | Monday 08 September 2025 00:39:55 +0000 (0:00:00.205) 0:00:17.856 ****** 2025-09-08 00:40:01.240983 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:40:01.240994 | orchestrator | 2025-09-08 00:40:01.241005 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:01.241016 | orchestrator | Monday 08 September 2025 00:39:55 +0000 (0:00:00.238) 0:00:18.095 ****** 2025-09-08 00:40:01.241027 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3) 2025-09-08 00:40:01.241039 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3) 2025-09-08 00:40:01.241050 | orchestrator | 2025-09-08 00:40:01.241061 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:01.241072 | orchestrator | Monday 08 September 2025 00:39:56 +0000 (0:00:00.437) 0:00:18.532 ****** 2025-09-08 00:40:01.241083 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4b92dc1e-8c5d-4e7b-ac22-fcae021763ab) 2025-09-08 00:40:01.241094 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4b92dc1e-8c5d-4e7b-ac22-fcae021763ab) 2025-09-08 00:40:01.241104 | orchestrator | 2025-09-08 00:40:01.241115 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:01.241126 | orchestrator | Monday 08 September 2025 00:39:56 +0000 (0:00:00.419) 0:00:18.951 ****** 2025-09-08 00:40:01.241137 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_59c5476b-d42d-4c70-8df0-eefae278ca55) 2025-09-08 00:40:01.241148 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_59c5476b-d42d-4c70-8df0-eefae278ca55) 2025-09-08 00:40:01.241159 | orchestrator | 2025-09-08 00:40:01.241170 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:01.241182 | orchestrator | Monday 08 September 2025 00:39:56 +0000 (0:00:00.423) 0:00:19.374 ****** 2025-09-08 00:40:01.241213 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d4ba40c0-17ae-4bff-a3cd-012c30b3474e) 2025-09-08 00:40:01.241225 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d4ba40c0-17ae-4bff-a3cd-012c30b3474e) 2025-09-08 00:40:01.241236 | orchestrator | 2025-09-08 00:40:01.241248 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:01.241259 | orchestrator | Monday 08 September 2025 00:39:57 +0000 (0:00:00.454) 0:00:19.829 ****** 2025-09-08 00:40:01.241270 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-08 00:40:01.241281 | orchestrator | 2025-09-08 00:40:01.241291 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:40:01.241302 | orchestrator | Monday 08 September 2025 00:39:57 +0000 (0:00:00.334) 0:00:20.164 ****** 2025-09-08 00:40:01.241313 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-08 00:40:01.241335 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-08 00:40:01.241346 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-08 00:40:01.241357 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-08 00:40:01.241368 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-08 00:40:01.241379 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-08 00:40:01.241390 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-08 00:40:01.241401 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-08 00:40:01.241412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-08 00:40:01.241422 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-08 00:40:01.241433 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-08 00:40:01.241444 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-08 00:40:01.241455 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-08 00:40:01.241466 | orchestrator | 2025-09-08 00:40:01.241477 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:40:01.241488 | orchestrator | Monday 08 September 2025 00:39:58 +0000 (0:00:00.387) 0:00:20.551 ****** 2025-09-08 00:40:01.241499 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:40:01.241510 | orchestrator | 2025-09-08 00:40:01.241521 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:40:01.241531 | orchestrator | Monday 08 September 2025 00:39:58 +0000 (0:00:00.204) 0:00:20.756 ****** 2025-09-08 00:40:01.241542 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:40:01.241553 | orchestrator | 2025-09-08 00:40:01.241570 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:40:01.241581 | orchestrator | Monday 08 September 2025 00:39:59 +0000 (0:00:00.728) 0:00:21.484 ****** 
2025-09-08 00:40:01.241592 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:01.241603 | orchestrator |
2025-09-08 00:40:01.241614 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:01.241625 | orchestrator | Monday 08 September 2025 00:39:59 +0000 (0:00:00.280) 0:00:21.765 ******
2025-09-08 00:40:01.241636 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:01.241647 | orchestrator |
2025-09-08 00:40:01.241658 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:01.241669 | orchestrator | Monday 08 September 2025 00:39:59 +0000 (0:00:00.266) 0:00:22.031 ******
2025-09-08 00:40:01.241680 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:01.241690 | orchestrator |
2025-09-08 00:40:01.241701 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:01.241712 | orchestrator | Monday 08 September 2025 00:39:59 +0000 (0:00:00.209) 0:00:22.241 ******
2025-09-08 00:40:01.241723 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:01.241734 | orchestrator |
2025-09-08 00:40:01.241745 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:01.241756 | orchestrator | Monday 08 September 2025 00:40:00 +0000 (0:00:00.187) 0:00:22.428 ******
2025-09-08 00:40:01.241766 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:01.241777 | orchestrator |
2025-09-08 00:40:01.241788 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:01.241799 | orchestrator | Monday 08 September 2025 00:40:00 +0000 (0:00:00.188) 0:00:22.616 ******
2025-09-08 00:40:01.241810 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:01.241855 | orchestrator |
2025-09-08 00:40:01.241866 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:01.241885 | orchestrator | Monday 08 September 2025 00:40:00 +0000 (0:00:00.185) 0:00:22.802 ******
2025-09-08 00:40:01.241896 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-09-08 00:40:01.241907 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-09-08 00:40:01.241918 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-09-08 00:40:01.241929 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-09-08 00:40:01.241940 | orchestrator |
2025-09-08 00:40:01.241951 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:01.241962 | orchestrator | Monday 08 September 2025 00:40:01 +0000 (0:00:00.635) 0:00:23.437 ******
2025-09-08 00:40:01.241973 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:01.241984 | orchestrator |
2025-09-08 00:40:01.242003 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:07.085644 | orchestrator | Monday 08 September 2025 00:40:01 +0000 (0:00:00.201) 0:00:23.639 ******
2025-09-08 00:40:07.085748 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:07.085762 | orchestrator |
2025-09-08 00:40:07.085773 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:07.085782 | orchestrator | Monday 08 September 2025 00:40:01 +0000 (0:00:00.189) 0:00:23.828 ******
2025-09-08 00:40:07.085791 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:07.085800 | orchestrator |
2025-09-08 00:40:07.085861 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:07.085871 | orchestrator | Monday 08 September 2025 00:40:01 +0000 (0:00:00.181) 0:00:24.010 ******
2025-09-08 00:40:07.085880 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:07.085889 | orchestrator |
2025-09-08 00:40:07.085898 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-08 00:40:07.085907 | orchestrator | Monday 08 September 2025 00:40:01 +0000 (0:00:00.206) 0:00:24.217 ******
2025-09-08 00:40:07.085916 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-09-08 00:40:07.085926 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-09-08 00:40:07.085935 | orchestrator |
2025-09-08 00:40:07.085943 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-08 00:40:07.085952 | orchestrator | Monday 08 September 2025 00:40:02 +0000 (0:00:00.366) 0:00:24.583 ******
2025-09-08 00:40:07.085961 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:07.085970 | orchestrator |
2025-09-08 00:40:07.085979 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-08 00:40:07.085988 | orchestrator | Monday 08 September 2025 00:40:02 +0000 (0:00:00.142) 0:00:24.726 ******
2025-09-08 00:40:07.085997 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:07.086006 | orchestrator |
2025-09-08 00:40:07.086057 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-08 00:40:07.086067 | orchestrator | Monday 08 September 2025 00:40:02 +0000 (0:00:00.147) 0:00:24.873 ******
2025-09-08 00:40:07.086075 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:07.086084 | orchestrator |
2025-09-08 00:40:07.086093 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-08 00:40:07.086101 | orchestrator | Monday 08 September 2025 00:40:02 +0000 (0:00:00.132) 0:00:25.005 ******
2025-09-08 00:40:07.086110 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:40:07.086120 | orchestrator |
2025-09-08 00:40:07.086128 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-08 00:40:07.086137 | orchestrator | Monday 08 September 2025 00:40:02 +0000 (0:00:00.135) 0:00:25.141 ******
2025-09-08 00:40:07.086147 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'}})
2025-09-08 00:40:07.086156 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aa077d44-869a-533b-aa21-81dea0f926a7'}})
2025-09-08 00:40:07.086165 | orchestrator |
2025-09-08 00:40:07.086174 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-09-08 00:40:07.086210 | orchestrator | Monday 08 September 2025 00:40:02 +0000 (0:00:00.168) 0:00:25.309 ******
2025-09-08 00:40:07.086222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'}})
2025-09-08 00:40:07.086234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aa077d44-869a-533b-aa21-81dea0f926a7'}})
2025-09-08 00:40:07.086246 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:07.086256 | orchestrator |
2025-09-08 00:40:07.086283 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-09-08 00:40:07.086295 | orchestrator | Monday 08 September 2025 00:40:03 +0000 (0:00:00.139) 0:00:25.449 ******
2025-09-08 00:40:07.086306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'}})
2025-09-08 00:40:07.086317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aa077d44-869a-533b-aa21-81dea0f926a7'}})
2025-09-08 00:40:07.086327 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:07.086338 | orchestrator |
2025-09-08 00:40:07.086348 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-09-08 00:40:07.086358 | orchestrator | Monday 08 September 2025 00:40:03 +0000 (0:00:00.161) 0:00:25.610 ******
2025-09-08 00:40:07.086369 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'}})
2025-09-08 00:40:07.086380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aa077d44-869a-533b-aa21-81dea0f926a7'}})
2025-09-08 00:40:07.086391 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:07.086402 | orchestrator |
2025-09-08 00:40:07.086412 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-09-08 00:40:07.086423 | orchestrator | Monday 08 September 2025 00:40:03 +0000 (0:00:00.156) 0:00:25.767 ******
2025-09-08 00:40:07.086434 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:40:07.086444 | orchestrator |
2025-09-08 00:40:07.086455 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-09-08 00:40:07.086466 | orchestrator | Monday 08 September 2025 00:40:03 +0000 (0:00:00.153) 0:00:25.920 ******
2025-09-08 00:40:07.086476 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:40:07.086487 | orchestrator |
2025-09-08 00:40:07.086497 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-09-08 00:40:07.086508 | orchestrator | Monday 08 September 2025 00:40:03 +0000 (0:00:00.136) 0:00:26.057 ******
2025-09-08 00:40:07.086519 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:07.086529 | orchestrator |
2025-09-08 00:40:07.086556 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-09-08 00:40:07.086565 | orchestrator | Monday 08 September 2025 00:40:03 +0000 (0:00:00.125) 0:00:26.183 ******
2025-09-08 00:40:07.086573 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:07.086582 | orchestrator |
2025-09-08 00:40:07.086591 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-08 00:40:07.086600 | orchestrator | Monday 08 September 2025 00:40:04 +0000 (0:00:00.242) 0:00:26.425 ******
2025-09-08 00:40:07.086608 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:07.086617 | orchestrator |
2025-09-08 00:40:07.086626 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-08 00:40:07.086635 | orchestrator | Monday 08 September 2025 00:40:04 +0000 (0:00:00.114) 0:00:26.540 ******
2025-09-08 00:40:07.086643 | orchestrator | ok: [testbed-node-4] => {
2025-09-08 00:40:07.086652 | orchestrator |     "ceph_osd_devices": {
2025-09-08 00:40:07.086661 | orchestrator |         "sdb": {
2025-09-08 00:40:07.086671 | orchestrator |             "osd_lvm_uuid": "ea3e0024-52d1-5c15-9011-f3e2d7c1d29b"
2025-09-08 00:40:07.086680 | orchestrator |         },
2025-09-08 00:40:07.086689 | orchestrator |         "sdc": {
2025-09-08 00:40:07.086706 | orchestrator |             "osd_lvm_uuid": "aa077d44-869a-533b-aa21-81dea0f926a7"
2025-09-08 00:40:07.086715 | orchestrator |         }
2025-09-08 00:40:07.086724 | orchestrator |     }
2025-09-08 00:40:07.086733 | orchestrator | }
2025-09-08 00:40:07.086742 | orchestrator |
2025-09-08 00:40:07.086751 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-08 00:40:07.086760 | orchestrator | Monday 08 September 2025 00:40:04 +0000 (0:00:00.120) 0:00:26.660 ******
2025-09-08 00:40:07.086768 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:07.086777 | orchestrator |
2025-09-08 00:40:07.086786 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-08 00:40:07.086795 | orchestrator | Monday 08 September 2025 00:40:04 +0000 (0:00:00.100) 0:00:26.761 ******
2025-09-08 00:40:07.086803 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:07.086830 | orchestrator |
2025-09-08 00:40:07.086839 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-08 00:40:07.086848 | orchestrator | Monday 08 September 2025 00:40:04 +0000 (0:00:00.127) 0:00:26.888 ******
2025-09-08 00:40:07.086856 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:40:07.086865 | orchestrator |
2025-09-08 00:40:07.086874 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-08 00:40:07.086882 | orchestrator | Monday 08 September 2025 00:40:04 +0000 (0:00:00.123) 0:00:27.012 ******
2025-09-08 00:40:07.086891 | orchestrator | changed: [testbed-node-4] => {
2025-09-08 00:40:07.086899 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-08 00:40:07.086908 | orchestrator |         "ceph_osd_devices": {
2025-09-08 00:40:07.086917 | orchestrator |             "sdb": {
2025-09-08 00:40:07.086925 | orchestrator |                 "osd_lvm_uuid": "ea3e0024-52d1-5c15-9011-f3e2d7c1d29b"
2025-09-08 00:40:07.086935 | orchestrator |             },
2025-09-08 00:40:07.086944 | orchestrator |             "sdc": {
2025-09-08 00:40:07.086952 | orchestrator |                 "osd_lvm_uuid": "aa077d44-869a-533b-aa21-81dea0f926a7"
2025-09-08 00:40:07.086961 | orchestrator |             }
2025-09-08 00:40:07.086970 | orchestrator |         },
2025-09-08 00:40:07.086979 | orchestrator |         "lvm_volumes": [
2025-09-08 00:40:07.086987 | orchestrator |             {
2025-09-08 00:40:07.086996 | orchestrator |                 "data": "osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b",
2025-09-08 00:40:07.087005 | orchestrator |                 "data_vg": "ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b"
2025-09-08 00:40:07.087014 | orchestrator |             },
2025-09-08 00:40:07.087022 | orchestrator |             {
2025-09-08 00:40:07.087031 | orchestrator |                 "data": "osd-block-aa077d44-869a-533b-aa21-81dea0f926a7",
2025-09-08 00:40:07.087040 | orchestrator |                 "data_vg": "ceph-aa077d44-869a-533b-aa21-81dea0f926a7"
2025-09-08 00:40:07.087048 | orchestrator |             }
2025-09-08 00:40:07.087057 | orchestrator |         ]
2025-09-08 00:40:07.087066 | orchestrator |     }
2025-09-08 00:40:07.087074 | orchestrator | }
2025-09-08 00:40:07.087083 | orchestrator |
2025-09-08 00:40:07.087092 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-08 00:40:07.087100 | orchestrator | Monday 08 September 2025 00:40:04 +0000 (0:00:00.179) 0:00:27.192 ******
2025-09-08 00:40:07.087109 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-08 00:40:07.087118 | orchestrator |
2025-09-08 00:40:07.087126 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-08 00:40:07.087135 | orchestrator |
2025-09-08 00:40:07.087143 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-08 00:40:07.087152 | orchestrator | Monday 08 September 2025 00:40:05 +0000 (0:00:01.017) 0:00:28.209 ******
2025-09-08 00:40:07.087161 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-08 00:40:07.087169 | orchestrator |
2025-09-08 00:40:07.087178 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-08 00:40:07.087187 | orchestrator | Monday 08 September 2025 00:40:06 +0000 (0:00:00.400) 0:00:28.609 ******
2025-09-08 00:40:07.087201 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:40:07.087210 | orchestrator |
2025-09-08 00:40:07.087224 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:40:07.087233 | orchestrator | Monday 08 September 2025 00:40:06 +0000 (0:00:00.554) 0:00:29.163 ******
2025-09-08 00:40:07.087242 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-09-08 00:40:07.087250 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-09-08 00:40:07.087259 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-09-08 00:40:07.087268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-09-08 00:40:07.087276 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-09-08 00:40:07.087285 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-09-08 00:40:07.087298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-09-08 00:40:15.885360 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-09-08 00:40:15.885495 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-09-08 00:40:15.885511 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-09-08 00:40:15.885523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-09-08 00:40:15.885534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-09-08 00:40:15.885546 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-09-08 00:40:15.885557 | orchestrator |
2025-09-08 00:40:15.885570 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:40:15.885582 | orchestrator | Monday 08 September 2025 00:40:07 +0000 (0:00:00.320) 0:00:29.484 ******
2025-09-08 00:40:15.885593 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:15.885605 | orchestrator |
2025-09-08 00:40:15.885616 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:40:15.885627 | orchestrator | Monday 08 September 2025 00:40:07 +0000 (0:00:00.170) 0:00:29.654 ******
2025-09-08 00:40:15.885638 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:15.885649 | orchestrator |
2025-09-08 00:40:15.885660 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:40:15.885670 | orchestrator | Monday 08 September 2025 00:40:07 +0000 (0:00:00.214) 0:00:29.869 ******
2025-09-08 00:40:15.885681 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:15.885692 | orchestrator |
2025-09-08 00:40:15.885703 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:40:15.885714 | orchestrator | Monday 08 September 2025 00:40:07 +0000 (0:00:00.197) 0:00:30.067 ******
2025-09-08 00:40:15.885724 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:15.885735 | orchestrator |
2025-09-08 00:40:15.885746 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:40:15.885756 | orchestrator | Monday 08 September 2025 00:40:07 +0000 (0:00:00.182) 0:00:30.250 ******
2025-09-08 00:40:15.885767 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:15.885778 | orchestrator |
2025-09-08 00:40:15.885788 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:40:15.885833 | orchestrator | Monday 08 September 2025 00:40:08 +0000 (0:00:00.201) 0:00:30.452 ******
2025-09-08 00:40:15.885845 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:15.885856 | orchestrator |
2025-09-08 00:40:15.885867 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:40:15.885880 | orchestrator | Monday 08 September 2025 00:40:08 +0000 (0:00:00.168) 0:00:30.620 ******
2025-09-08 00:40:15.885893 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:15.885934 | orchestrator |
2025-09-08 00:40:15.885948 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:40:15.885961 | orchestrator | Monday 08 September 2025 00:40:08 +0000 (0:00:00.203) 0:00:30.823 ******
2025-09-08 00:40:15.885973 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:15.885986 | orchestrator |
2025-09-08 00:40:15.885999 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:40:15.886011 | orchestrator | Monday 08 September 2025 00:40:08 +0000 (0:00:00.225) 0:00:31.049 ******
2025-09-08 00:40:15.886083 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8)
2025-09-08 00:40:15.886098 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8)
2025-09-08 00:40:15.886111 | orchestrator |
2025-09-08 00:40:15.886124 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:40:15.886137 | orchestrator | Monday 08 September 2025 00:40:09 +0000 (0:00:00.724) 0:00:31.773 ******
2025-09-08 00:40:15.886149 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a654280a-a62d-423c-bf4b-ecfb391ad989)
2025-09-08 00:40:15.886161 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a654280a-a62d-423c-bf4b-ecfb391ad989)
2025-09-08 00:40:15.886173 | orchestrator |
2025-09-08 00:40:15.886187 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:40:15.886199 | orchestrator | Monday 08 September 2025 00:40:10 +0000 (0:00:01.038) 0:00:32.812 ******
2025-09-08 00:40:15.886212 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_63bbd3aa-19f1-48b0-9249-561d852b638c)
2025-09-08 00:40:15.886225 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_63bbd3aa-19f1-48b0-9249-561d852b638c)
2025-09-08 00:40:15.886235 | orchestrator |
2025-09-08 00:40:15.886246 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:40:15.886257 | orchestrator | Monday 08 September 2025 00:40:10 +0000 (0:00:00.434) 0:00:33.246 ******
2025-09-08 00:40:15.886267 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_17ecbc41-9c45-4ac3-8b64-5422c11ec1e9)
2025-09-08 00:40:15.886278 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_17ecbc41-9c45-4ac3-8b64-5422c11ec1e9)
2025-09-08 00:40:15.886289 | orchestrator |
2025-09-08 00:40:15.886299 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:40:15.886310 | orchestrator | Monday 08 September 2025 00:40:11 +0000 (0:00:00.559) 0:00:33.806 ******
2025-09-08 00:40:15.886321 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-08 00:40:15.886332 | orchestrator |
2025-09-08 00:40:15.886342 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:15.886353 | orchestrator | Monday 08 September 2025 00:40:11 +0000 (0:00:00.353) 0:00:34.159 ******
2025-09-08 00:40:15.886385 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-09-08 00:40:15.886397 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-09-08 00:40:15.886408 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-09-08 00:40:15.886418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-09-08 00:40:15.886429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-09-08 00:40:15.886440 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-09-08 00:40:15.886470 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-09-08 00:40:15.886482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-09-08 00:40:15.886494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-09-08 00:40:15.886517 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-09-08 00:40:15.886528 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-09-08 00:40:15.886539 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-09-08 00:40:15.886549 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-09-08 00:40:15.886560 | orchestrator |
2025-09-08 00:40:15.886571 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:15.886582 | orchestrator | Monday 08 September 2025 00:40:12 +0000 (0:00:00.412) 0:00:34.572 ******
2025-09-08 00:40:15.886592 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:15.886603 | orchestrator |
2025-09-08 00:40:15.886614 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:15.886624 | orchestrator | Monday 08 September 2025 00:40:12 +0000 (0:00:00.221) 0:00:34.793 ******
2025-09-08 00:40:15.886635 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:15.886646 | orchestrator |
2025-09-08 00:40:15.886657 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:15.886667 | orchestrator | Monday 08 September 2025 00:40:12 +0000 (0:00:00.203) 0:00:34.997 ******
2025-09-08 00:40:15.886678 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:15.886689 | orchestrator |
2025-09-08 00:40:15.886705 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:15.886716 | orchestrator | Monday 08 September 2025 00:40:12 +0000 (0:00:00.190) 0:00:35.187 ******
2025-09-08 00:40:15.886727 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:15.886738 | orchestrator |
2025-09-08 00:40:15.886748 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:15.886759 | orchestrator | Monday 08 September 2025 00:40:12 +0000 (0:00:00.198) 0:00:35.386 ******
2025-09-08 00:40:15.886770 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:15.886780 | orchestrator |
2025-09-08 00:40:15.886791 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:15.886821 | orchestrator | Monday 08 September 2025 00:40:13 +0000 (0:00:00.208) 0:00:35.594 ******
2025-09-08 00:40:15.886832 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:15.886843 | orchestrator |
2025-09-08 00:40:15.886854 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:15.886865 | orchestrator | Monday 08 September 2025 00:40:13 +0000 (0:00:00.746) 0:00:36.340 ******
2025-09-08 00:40:15.886876 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:15.886886 | orchestrator |
2025-09-08 00:40:15.886897 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:15.886908 | orchestrator | Monday 08 September 2025 00:40:14 +0000 (0:00:00.196) 0:00:36.537 ******
2025-09-08 00:40:15.886919 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:15.886930 | orchestrator |
2025-09-08 00:40:15.886941 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:15.886951 | orchestrator | Monday 08 September 2025 00:40:14 +0000 (0:00:00.259) 0:00:36.797 ******
2025-09-08 00:40:15.886962 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-09-08 00:40:15.886973 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-09-08 00:40:15.886984 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-09-08 00:40:15.886995 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-09-08 00:40:15.887006 | orchestrator |
2025-09-08 00:40:15.887017 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:15.887028 | orchestrator | Monday 08 September 2025 00:40:15 +0000 (0:00:00.700) 0:00:37.498 ******
2025-09-08 00:40:15.887038 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:15.887049 | orchestrator |
2025-09-08 00:40:15.887060 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:15.887080 | orchestrator | Monday 08 September 2025 00:40:15 +0000 (0:00:00.193) 0:00:37.692 ******
2025-09-08 00:40:15.887091 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:15.887102 | orchestrator |
2025-09-08 00:40:15.887113 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:15.887123 | orchestrator | Monday 08 September 2025 00:40:15 +0000 (0:00:00.206) 0:00:37.898 ******
2025-09-08 00:40:15.887134 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:15.887145 | orchestrator |
2025-09-08 00:40:15.887156 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:15.887167 | orchestrator | Monday 08 September 2025 00:40:15 +0000 (0:00:00.198) 0:00:38.096 ******
2025-09-08 00:40:15.887177 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:15.887188 | orchestrator |
2025-09-08 00:40:15.887199 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-08 00:40:15.887216 | orchestrator | Monday 08 September 2025 00:40:15 +0000 (0:00:00.186) 0:00:38.283 ******
2025-09-08 00:40:20.042979 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-09-08 00:40:20.043088 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-09-08 00:40:20.043096 | orchestrator |
2025-09-08 00:40:20.043103 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-08 00:40:20.043109 | orchestrator | Monday 08 September 2025 00:40:16 +0000 (0:00:00.163) 0:00:38.446 ******
2025-09-08 00:40:20.043115 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:20.043121 | orchestrator |
2025-09-08 00:40:20.043126 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-08 00:40:20.043132 | orchestrator | Monday 08 September 2025 00:40:16 +0000 (0:00:00.110) 0:00:38.557 ******
2025-09-08 00:40:20.043138 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:20.043143 | orchestrator |
2025-09-08 00:40:20.043149 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-08 00:40:20.043154 | orchestrator | Monday 08 September 2025 00:40:16 +0000 (0:00:00.110) 0:00:38.667 ******
2025-09-08 00:40:20.043160 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:20.043166 | orchestrator |
2025-09-08 00:40:20.043171 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-08 00:40:20.043177 | orchestrator | Monday 08 September 2025 00:40:16 +0000 (0:00:00.109) 0:00:38.777 ******
2025-09-08 00:40:20.043183 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:40:20.043190 | orchestrator |
2025-09-08 00:40:20.043195 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-08 00:40:20.043201 | orchestrator | Monday 08 September 2025 00:40:16 +0000 (0:00:00.255) 0:00:39.032 ******
2025-09-08 00:40:20.043208 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'df550631-cfd3-5799-aa47-c702e103b9e1'}})
2025-09-08 00:40:20.043244 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eee7454c-3e15-5681-817b-16336d12a7fd'}})
2025-09-08 00:40:20.043250 | orchestrator |
2025-09-08 00:40:20.043256 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-09-08 00:40:20.043262 | orchestrator | Monday 08 September 2025 00:40:16 +0000 (0:00:00.149) 0:00:39.182 ******
2025-09-08 00:40:20.043268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'df550631-cfd3-5799-aa47-c702e103b9e1'}})
2025-09-08 00:40:20.043277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eee7454c-3e15-5681-817b-16336d12a7fd'}})
2025-09-08 00:40:20.043283 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:20.043288 | orchestrator |
2025-09-08 00:40:20.043294 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-09-08 00:40:20.043300 | orchestrator | Monday 08 September 2025 00:40:16 +0000 (0:00:00.155) 0:00:39.337 ******
2025-09-08 00:40:20.043307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'df550631-cfd3-5799-aa47-c702e103b9e1'}})
2025-09-08 00:40:20.043340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eee7454c-3e15-5681-817b-16336d12a7fd'}})
2025-09-08 00:40:20.043346 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:20.043352 | orchestrator |
2025-09-08 00:40:20.043357 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-09-08 00:40:20.043363 | orchestrator | Monday 08 September 2025 00:40:17 +0000 (0:00:00.155) 0:00:39.493 ******
2025-09-08 00:40:20.043369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'df550631-cfd3-5799-aa47-c702e103b9e1'}})
2025-09-08 00:40:20.043392 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eee7454c-3e15-5681-817b-16336d12a7fd'}})
2025-09-08 00:40:20.043399 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:20.043405 | orchestrator |
2025-09-08 00:40:20.043410 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-09-08 00:40:20.043416 | orchestrator | Monday 08 September 2025 00:40:17 +0000 (0:00:00.143) 0:00:39.637 ******
2025-09-08 00:40:20.043422 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:40:20.043428 | orchestrator |
2025-09-08 00:40:20.043433 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-09-08 00:40:20.043439 | orchestrator | Monday 08 September 2025 00:40:17 +0000 (0:00:00.136) 0:00:39.773 ******
2025-09-08 00:40:20.043445 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:40:20.043451 | orchestrator |
2025-09-08 00:40:20.043457 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-09-08 00:40:20.043462 | orchestrator | Monday 08 September 2025 00:40:17 +0000 (0:00:00.136) 0:00:39.909 ******
2025-09-08 00:40:20.043468 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:20.043474 | orchestrator |
2025-09-08 00:40:20.043480 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-09-08 00:40:20.043485 | orchestrator | Monday 08 September 2025 00:40:17 +0000 (0:00:00.144) 0:00:40.053 ******
2025-09-08 00:40:20.043491 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:20.043497 | orchestrator |
2025-09-08 00:40:20.043503 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-08 00:40:20.043509 | orchestrator | Monday 08 September 2025 00:40:17 +0000 (0:00:00.122) 0:00:40.175 ******
2025-09-08 00:40:20.043516 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:20.043523 | orchestrator |
2025-09-08 00:40:20.043530 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-08 00:40:20.043537 | orchestrator | Monday 08 September 2025 00:40:17 +0000 (0:00:00.139) 0:00:40.315 ******
2025-09-08 00:40:20.043544 | orchestrator | ok: [testbed-node-5] => {
2025-09-08 00:40:20.043551 | orchestrator |     "ceph_osd_devices": {
2025-09-08 00:40:20.043559 | orchestrator |         "sdb": {
2025-09-08 00:40:20.043566 | orchestrator |             "osd_lvm_uuid": "df550631-cfd3-5799-aa47-c702e103b9e1"
2025-09-08 00:40:20.043588 | orchestrator |         },
2025-09-08 00:40:20.043596 | orchestrator |         "sdc": {
2025-09-08 00:40:20.043603 | orchestrator |             "osd_lvm_uuid": "eee7454c-3e15-5681-817b-16336d12a7fd"
2025-09-08 00:40:20.043610 | orchestrator |         }
2025-09-08 00:40:20.043617 | orchestrator |     }
2025-09-08 00:40:20.043625 | orchestrator | }
2025-09-08 00:40:20.043632 | orchestrator |
2025-09-08 00:40:20.043639 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-08 00:40:20.043646 | orchestrator | Monday 08 September 2025 00:40:18 +0000 (0:00:00.588) 0:00:40.904 ******
2025-09-08 00:40:20.043653 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:20.043660 | orchestrator |
2025-09-08 00:40:20.043667 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-08 00:40:20.043674 | orchestrator | Monday 08 September 2025 00:40:18 +0000 (0:00:00.108) 0:00:41.012 ******
2025-09-08 00:40:20.043681 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:20.043688 | orchestrator |
2025-09-08 00:40:20.043695 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-08 00:40:20.043708 | orchestrator | Monday 08 September 2025 00:40:18 +0000 (0:00:00.254) 0:00:41.266 ******
2025-09-08 00:40:20.043715 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:20.043722 | orchestrator |
2025-09-08 00:40:20.043728 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-08 00:40:20.043736 | orchestrator | Monday 08 September 2025 00:40:18 +0000 (0:00:00.129) 0:00:41.396 ******
2025-09-08 00:40:20.043742 | orchestrator | changed: [testbed-node-5] => {
2025-09-08 00:40:20.043749 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-08 00:40:20.043756 | orchestrator |         "ceph_osd_devices": {
2025-09-08 00:40:20.043763 | orchestrator |             "sdb": {
2025-09-08 00:40:20.043769 | orchestrator |                 "osd_lvm_uuid": "df550631-cfd3-5799-aa47-c702e103b9e1"
2025-09-08 00:40:20.043776 | orchestrator |             },
2025-09-08 00:40:20.043783 | orchestrator |             "sdc": {
2025-09-08 00:40:20.043790 | orchestrator |                 "osd_lvm_uuid": "eee7454c-3e15-5681-817b-16336d12a7fd"
2025-09-08 00:40:20.043811 | orchestrator |             }
2025-09-08 00:40:20.043817 | orchestrator |         },
2025-09-08 00:40:20.043824 | orchestrator |         "lvm_volumes": [
2025-09-08 00:40:20.043831 | orchestrator |             {
2025-09-08 00:40:20.043838 | orchestrator |                 "data": "osd-block-df550631-cfd3-5799-aa47-c702e103b9e1",
2025-09-08 00:40:20.043845 | orchestrator |                 "data_vg": "ceph-df550631-cfd3-5799-aa47-c702e103b9e1"
2025-09-08 00:40:20.043852 | orchestrator |             },
2025-09-08 00:40:20.043859 | orchestrator |             {
2025-09-08 00:40:20.043865 | orchestrator |                 "data": "osd-block-eee7454c-3e15-5681-817b-16336d12a7fd",
2025-09-08 00:40:20.043871 | orchestrator |                 "data_vg": "ceph-eee7454c-3e15-5681-817b-16336d12a7fd"
2025-09-08 00:40:20.043877 | orchestrator |             }
2025-09-08 00:40:20.043883 | orchestrator |         ]
2025-09-08 00:40:20.043889 | orchestrator |     }
2025-09-08 00:40:20.043898 | orchestrator | }
2025-09-08 00:40:20.043904 | orchestrator |
2025-09-08 00:40:20.043910 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-08 00:40:20.043915 | orchestrator | Monday 08 September 2025 00:40:19 +0000 (0:00:00.198) 0:00:41.595 ******
2025-09-08 00:40:20.043921 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-08 00:40:20.043927 | orchestrator |
2025-09-08 00:40:20.043933 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:40:20.043939 | orchestrator | testbed-node-3             : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0
2025-09-08 00:40:20.043946 | orchestrator | testbed-node-4             : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0
2025-09-08 00:40:20.043952 | orchestrator | testbed-node-5             : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0
2025-09-08 00:40:20.043958 | orchestrator |
2025-09-08 00:40:20.043963 | orchestrator |
2025-09-08 00:40:20.043969 | orchestrator |
2025-09-08 00:40:20.043975 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:40:20.043981 | orchestrator | Monday 08 September 2025 00:40:20 +0000 (0:00:00.828) 0:00:42.423 ******
2025-09-08 00:40:20.043986 | orchestrator | ===============================================================================
2025-09-08 00:40:20.043992 | orchestrator | Write configuration file ------------------------------------------------ 4.12s
2025-09-08 00:40:20.043998 | orchestrator | Add known partitions to the list of available block devices ------------- 1.19s
2025-09-08 00:40:20.044004 | orchestrator | Add known partitions to the list of available block devices ------------- 1.12s
2025-09-08 00:40:20.044009 | orchestrator | Add known links to the list of available block devices ------------------ 1.05s
2025-09-08 00:40:20.044015 | orchestrator | Add known links to the list of available block devices ------------------ 1.04s
2025-09-08 00:40:20.044025 | orchestrator | Get initial list of available block devices ----------------------------- 1.03s
2025-09-08 00:40:20.044031 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.93s
2025-09-08 00:40:20.044037 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.85s
2025-09-08 00:40:20.044042 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s
2025-09-08 00:40:20.044048 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s
2025-09-08 00:40:20.044054 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s
2025-09-08 00:40:20.044060 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2025-09-08 00:40:20.044065 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.71s
2025-09-08 00:40:20.044071 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s
2025-09-08 00:40:20.044081 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.66s
2025-09-08 00:40:20.264238 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2025-09-08 00:40:20.264346 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
2025-09-08 00:40:20.264359 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2025-09-08 00:40:20.264371 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2025-09-08 00:40:20.264382 | orchestrator | Print configuration data ------------------------------------------------ 0.59s
2025-09-08 00:40:43.042727 | orchestrator | 2025-09-08 00:40:43 | INFO  | Task 013ac08f-a517-4cc9-aa13-f7d3a0b57424 (sync inventory) is running in background. Output coming soon.
2025-09-08 00:41:08.054344 | orchestrator | 2025-09-08 00:40:44 | INFO  | Starting group_vars file reorganization
2025-09-08 00:41:08.054470 | orchestrator | 2025-09-08 00:40:44 | INFO  | Moved 0 file(s) to their respective directories
2025-09-08 00:41:08.054486 | orchestrator | 2025-09-08 00:40:44 | INFO  | Group_vars file reorganization completed
2025-09-08 00:41:08.054498 | orchestrator | 2025-09-08 00:40:46 | INFO  | Starting variable preparation from inventory
2025-09-08 00:41:08.054509 | orchestrator | 2025-09-08 00:40:50 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-09-08 00:41:08.054521 | orchestrator | 2025-09-08 00:40:50 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-09-08 00:41:08.054532 | orchestrator | 2025-09-08 00:40:50 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-09-08 00:41:08.054566 | orchestrator | 2025-09-08 00:40:50 | INFO  | 3 file(s) written, 6 host(s) processed
2025-09-08 00:41:08.054579 | orchestrator | 2025-09-08 00:40:50 | INFO  | Variable preparation completed
2025-09-08 00:41:08.054590 | orchestrator | 2025-09-08 00:40:51 | INFO  | Starting inventory overwrite handling
2025-09-08 00:41:08.054601 | orchestrator | 2025-09-08 00:40:51 | INFO  | Handling group overwrites in 99-overwrite
2025-09-08 00:41:08.054617 | orchestrator | 2025-09-08 00:40:51 | INFO  | Removing group frr:children from 60-generic
2025-09-08 00:41:08.054629 | orchestrator | 2025-09-08 00:40:51 | INFO  | Removing group storage:children from 50-kolla
2025-09-08 00:41:08.054640 | orchestrator | 2025-09-08 00:40:51 | INFO  | Removing group netbird:children from 50-infrastruture
2025-09-08 00:41:08.054651 | orchestrator | 2025-09-08 00:40:51 | INFO  | Removing group ceph-rgw from 50-ceph
2025-09-08 00:41:08.054663 | orchestrator | 2025-09-08 00:40:51 | INFO  | Removing group ceph-mds from 50-ceph
2025-09-08 00:41:08.054674 | orchestrator | 2025-09-08 00:40:51 | INFO  | Handling group overwrites in 20-roles
2025-09-08 00:41:08.054685 | orchestrator | 2025-09-08 00:40:51 | INFO  | Removing group k3s_node from 50-infrastruture
2025-09-08 00:41:08.054724 | orchestrator | 2025-09-08 00:40:51 | INFO  | Removed 6 group(s) in total
2025-09-08 00:41:08.054735 | orchestrator | 2025-09-08 00:40:51 | INFO  | Inventory overwrite handling completed
2025-09-08 00:41:08.054784 | orchestrator | 2025-09-08 00:40:52 | INFO  | Starting merge of inventory files
2025-09-08 00:41:08.054795 | orchestrator | 2025-09-08 00:40:52 | INFO  | Inventory files merged successfully
2025-09-08 00:41:08.054806 | orchestrator | 2025-09-08 00:40:56 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-09-08 00:41:08.054817 | orchestrator | 2025-09-08 00:41:06 | INFO  | Successfully wrote ClusterShell configuration
2025-09-08 00:41:08.054828 | orchestrator | [master 772b869] 2025-09-08-00-41
2025-09-08 00:41:08.054841 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-09-08 00:41:10.336212 | orchestrator | 2025-09-08 00:41:10 | INFO  | Task de9fb503-59f8-4e53-9243-db1fe5d79df4 (ceph-create-lvm-devices) was prepared for execution.
2025-09-08 00:41:10.336318 | orchestrator | 2025-09-08 00:41:10 | INFO  | It takes a moment until task de9fb503-59f8-4e53-9243-db1fe5d79df4 (ceph-create-lvm-devices) has been started and output is visible here.
2025-09-08 00:41:21.268891 | orchestrator |
2025-09-08 00:41:21.269017 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-09-08 00:41:21.269034 | orchestrator |
2025-09-08 00:41:21.269047 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-08 00:41:21.269059 | orchestrator | Monday 08 September 2025 00:41:14 +0000 (0:00:00.319) 0:00:00.319 ******
2025-09-08 00:41:21.269071 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-08 00:41:21.269083 | orchestrator |
2025-09-08 00:41:21.269094 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-08 00:41:21.269105 | orchestrator | Monday 08 September 2025 00:41:15 +0000 (0:00:00.240) 0:00:00.560 ******
2025-09-08 00:41:21.269116 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:41:21.269128 | orchestrator |
2025-09-08 00:41:21.269139 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:21.269149 | orchestrator | Monday 08 September 2025 00:41:15 +0000 (0:00:00.219) 0:00:00.780 ******
2025-09-08 00:41:21.269160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-08 00:41:21.269172 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-08 00:41:21.269183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-08 00:41:21.269194 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-08 00:41:21.269204 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-08 00:41:21.269215 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-08 00:41:21.269225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-08 00:41:21.269236 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-08 00:41:21.269246 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-08 00:41:21.269257 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-08 00:41:21.269267 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-08 00:41:21.269278 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-08 00:41:21.269288 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-08 00:41:21.269299 | orchestrator |
2025-09-08 00:41:21.269310 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:21.269345 | orchestrator | Monday 08 September 2025 00:41:15 +0000 (0:00:00.372) 0:00:01.152 ******
2025-09-08 00:41:21.269356 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:21.269367 | orchestrator |
2025-09-08 00:41:21.269378 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:21.269392 | orchestrator | Monday 08 September 2025 00:41:16 +0000 (0:00:00.368) 0:00:01.520 ******
2025-09-08 00:41:21.269405 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:21.269417 | orchestrator |
2025-09-08 00:41:21.269430 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:21.269442 | orchestrator | Monday 08 September 2025 00:41:16 +0000 (0:00:00.206) 0:00:01.727 ******
2025-09-08 00:41:21.269454 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:21.269467 | orchestrator |
2025-09-08 00:41:21.269479 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:21.269493 | orchestrator | Monday 08 September 2025 00:41:16 +0000 (0:00:00.172) 0:00:01.899 ******
2025-09-08 00:41:21.269505 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:21.269517 | orchestrator |
2025-09-08 00:41:21.269530 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:21.269542 | orchestrator | Monday 08 September 2025 00:41:16 +0000 (0:00:00.165) 0:00:02.065 ******
2025-09-08 00:41:21.269555 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:21.269567 | orchestrator |
2025-09-08 00:41:21.269579 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:21.269592 | orchestrator | Monday 08 September 2025 00:41:16 +0000 (0:00:00.163) 0:00:02.229 ******
2025-09-08 00:41:21.269604 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:21.269617 | orchestrator |
2025-09-08 00:41:21.269629 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:21.269642 | orchestrator | Monday 08 September 2025 00:41:16 +0000 (0:00:00.169) 0:00:02.398 ******
2025-09-08 00:41:21.269655 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:21.269667 | orchestrator |
2025-09-08 00:41:21.269679 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:21.269691 | orchestrator | Monday 08 September 2025 00:41:17 +0000 (0:00:00.167) 0:00:02.566 ******
2025-09-08 00:41:21.269704 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:21.269716 | orchestrator |
2025-09-08 00:41:21.269751 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:21.269763 | orchestrator | Monday 08 September 2025 00:41:17 +0000 (0:00:00.158) 0:00:02.725 ******
2025-09-08 00:41:21.269774 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691)
2025-09-08 00:41:21.269785 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691)
2025-09-08 00:41:21.269796 | orchestrator |
2025-09-08 00:41:21.269807 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:21.269818 | orchestrator | Monday 08 September 2025 00:41:17 +0000 (0:00:00.362) 0:00:03.088 ******
2025-09-08 00:41:21.269848 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_db00b734-b58e-4932-8acd-6a266572e733)
2025-09-08 00:41:21.269860 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_db00b734-b58e-4932-8acd-6a266572e733)
2025-09-08 00:41:21.269870 | orchestrator |
2025-09-08 00:41:21.269881 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:21.269892 | orchestrator | Monday 08 September 2025 00:41:17 +0000 (0:00:00.354) 0:00:03.442 ******
2025-09-08 00:41:21.269902 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8d0cadb8-6915-4fd2-b4e0-4946f7f23ce1)
2025-09-08 00:41:21.269913 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8d0cadb8-6915-4fd2-b4e0-4946f7f23ce1)
2025-09-08 00:41:21.269924 | orchestrator |
2025-09-08 00:41:21.269934 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:21.269957 | orchestrator | Monday 08 September 2025 00:41:18 +0000 (0:00:00.522) 0:00:03.965 ******
2025-09-08 00:41:21.269967 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1f7dc1ee-c7b6-4bcc-8d38-7d9cabc41a41)
2025-09-08 00:41:21.269978 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1f7dc1ee-c7b6-4bcc-8d38-7d9cabc41a41)
2025-09-08 00:41:21.269989 | orchestrator |
2025-09-08 00:41:21.269999 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:21.270010 | orchestrator | Monday 08 September 2025 00:41:19 +0000 (0:00:00.729) 0:00:04.695 ******
2025-09-08 00:41:21.270079 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-08 00:41:21.270091 | orchestrator |
2025-09-08 00:41:21.270101 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:21.270112 | orchestrator | Monday 08 September 2025 00:41:19 +0000 (0:00:00.275) 0:00:04.970 ******
2025-09-08 00:41:21.270122 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-09-08 00:41:21.270133 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-09-08 00:41:21.270143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-09-08 00:41:21.270154 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-09-08 00:41:21.270184 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-09-08 00:41:21.270196 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-09-08 00:41:21.270206 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-09-08 00:41:21.270217 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-09-08 00:41:21.270228 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-09-08 00:41:21.270238 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-09-08 00:41:21.270249 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-09-08 00:41:21.270259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-09-08 00:41:21.270275 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-09-08 00:41:21.270286 | orchestrator |
2025-09-08 00:41:21.270297 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:21.270308 | orchestrator | Monday 08 September 2025 00:41:19 +0000 (0:00:00.297) 0:00:05.268 ******
2025-09-08 00:41:21.270319 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:21.270330 | orchestrator |
2025-09-08 00:41:21.270340 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:21.270351 | orchestrator | Monday 08 September 2025 00:41:19 +0000 (0:00:00.194) 0:00:05.463 ******
2025-09-08 00:41:21.270362 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:21.270372 | orchestrator |
2025-09-08 00:41:21.270383 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:21.270394 | orchestrator | Monday 08 September 2025 00:41:20 +0000 (0:00:00.171) 0:00:05.634 ******
2025-09-08 00:41:21.270405 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:21.270415 | orchestrator |
2025-09-08 00:41:21.270426 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:21.270437 | orchestrator | Monday 08 September 2025 00:41:20 +0000 (0:00:00.174) 0:00:05.809 ******
2025-09-08 00:41:21.270447 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:21.270458 | orchestrator |
2025-09-08 00:41:21.270469 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:21.270487 | orchestrator | Monday 08 September 2025 00:41:20 +0000 (0:00:00.166) 0:00:05.975 ******
2025-09-08 00:41:21.270498 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:21.270509 | orchestrator |
2025-09-08 00:41:21.270519 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:21.270530 | orchestrator | Monday 08 September 2025 00:41:20 +0000 (0:00:00.201) 0:00:06.177 ******
2025-09-08 00:41:21.270541 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:21.270552 | orchestrator |
2025-09-08 00:41:21.270563 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:21.270573 | orchestrator | Monday 08 September 2025 00:41:20 +0000 (0:00:00.197) 0:00:06.375 ******
2025-09-08 00:41:21.270584 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:21.270594 | orchestrator |
2025-09-08 00:41:21.270605 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:21.270616 | orchestrator | Monday 08 September 2025 00:41:21 +0000 (0:00:00.205) 0:00:06.580 ******
2025-09-08 00:41:21.270634 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:29.413257 | orchestrator |
2025-09-08 00:41:29.413391 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:29.413409 | orchestrator | Monday 08 September 2025 00:41:21 +0000 (0:00:00.160) 0:00:06.740 ******
2025-09-08 00:41:29.413420 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-08 00:41:29.413431 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-08 00:41:29.413442 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-08 00:41:29.413451 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-08 00:41:29.413461 | orchestrator |
2025-09-08 00:41:29.413471 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:29.413481 | orchestrator | Monday 08 September 2025 00:41:22 +0000 (0:00:00.809) 0:00:07.550 ******
2025-09-08 00:41:29.413491 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:29.413501 | orchestrator |
2025-09-08 00:41:29.413510 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:29.413520 | orchestrator | Monday 08 September 2025 00:41:22 +0000 (0:00:00.190) 0:00:07.740 ******
2025-09-08 00:41:29.413530 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:29.413539 | orchestrator |
2025-09-08 00:41:29.413549 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:29.413558 | orchestrator | Monday 08 September 2025 00:41:22 +0000 (0:00:00.210) 0:00:07.951 ******
2025-09-08 00:41:29.413568 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:29.413577 | orchestrator |
2025-09-08 00:41:29.413587 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:29.413597 | orchestrator | Monday 08 September 2025 00:41:22 +0000 (0:00:00.204) 0:00:08.156 ******
2025-09-08 00:41:29.413606 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:29.413616 | orchestrator |
2025-09-08 00:41:29.413625 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-08 00:41:29.413635 | orchestrator | Monday 08 September 2025 00:41:22 +0000 (0:00:00.249) 0:00:08.405 ******
2025-09-08 00:41:29.413644 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:29.413654 | orchestrator |
2025-09-08 00:41:29.413663 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-08 00:41:29.413672 | orchestrator | Monday 08 September 2025 00:41:23 +0000 (0:00:00.138) 0:00:08.544 ******
2025-09-08 00:41:29.413683 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6b18b724-0587-5812-9148-41071cea985b'}})
2025-09-08 00:41:29.413693 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9b42feaf-b3bc-5f68-b3eb-37674b93132b'}})
2025-09-08 00:41:29.413702 | orchestrator |
2025-09-08 00:41:29.413712 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-08 00:41:29.413771 | orchestrator | Monday 08 September 2025 00:41:23 +0000 (0:00:00.222) 0:00:08.766 ******
2025-09-08 00:41:29.413786 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})
2025-09-08 00:41:29.413827 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})
2025-09-08 00:41:29.413838 | orchestrator |
2025-09-08 00:41:29.413850 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-08 00:41:29.413862 | orchestrator | Monday 08 September 2025 00:41:25 +0000 (0:00:01.997) 0:00:10.764 ******
2025-09-08 00:41:29.413873 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})
2025-09-08 00:41:29.413886 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})
2025-09-08 00:41:29.413897 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:29.413908 | orchestrator |
2025-09-08 00:41:29.413919 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-08 00:41:29.413930 | orchestrator | Monday 08 September 2025 00:41:25 +0000 (0:00:00.181) 0:00:10.945 ******
2025-09-08 00:41:29.413941 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})
2025-09-08 00:41:29.413953 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})
2025-09-08 00:41:29.413964 | orchestrator |
2025-09-08 00:41:29.413974 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-08 00:41:29.413986 | orchestrator | Monday 08 September 2025 00:41:27 +0000 (0:00:01.668) 0:00:12.614 ******
2025-09-08 00:41:29.413997 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})
2025-09-08 00:41:29.414010 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})
2025-09-08 00:41:29.414070 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:29.414081 | orchestrator |
2025-09-08 00:41:29.414092 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-08 00:41:29.414103 | orchestrator | Monday 08 September 2025 00:41:27 +0000 (0:00:00.157) 0:00:12.772 ******
2025-09-08 00:41:29.414114 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:29.414125 | orchestrator |
2025-09-08 00:41:29.414134 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-08 00:41:29.414163 | orchestrator | Monday 08 September 2025 00:41:27 +0000 (0:00:00.130) 0:00:12.902 ******
2025-09-08 00:41:29.414173 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})
2025-09-08 00:41:29.414183 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})
2025-09-08 00:41:29.414193 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:29.414202 | orchestrator |
2025-09-08 00:41:29.414212 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-08 00:41:29.414222 | orchestrator | Monday 08 September 2025 00:41:27 +0000 (0:00:00.383) 0:00:13.285 ******
2025-09-08 00:41:29.414231 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:29.414241 | orchestrator |
2025-09-08 00:41:29.414251 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-08 00:41:29.414260 | orchestrator | Monday 08 September 2025 00:41:27 +0000 (0:00:00.132) 0:00:13.418 ******
2025-09-08 00:41:29.414270 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})
2025-09-08 00:41:29.414288 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})
2025-09-08 00:41:29.414298 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:29.414307 | orchestrator |
2025-09-08 00:41:29.414317 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-08 00:41:29.414327 | orchestrator | Monday 08 September 2025 00:41:28 +0000 (0:00:00.164) 0:00:13.582 ******
2025-09-08 00:41:29.414336 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:29.414346 | orchestrator |
2025-09-08 00:41:29.414356 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-08 00:41:29.414365 | orchestrator | Monday 08 September 2025 00:41:28 +0000 (0:00:00.160) 0:00:13.743 ******
2025-09-08 00:41:29.414375 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})
2025-09-08 00:41:29.414385 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})
2025-09-08 00:41:29.414394 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:29.414404 | orchestrator |
2025-09-08 00:41:29.414414 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-08 00:41:29.414423 | orchestrator | Monday 08 September 2025 00:41:28 +0000 (0:00:00.198) 0:00:13.942 ******
2025-09-08 00:41:29.414433 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:41:29.414443 | orchestrator |
2025-09-08 00:41:29.414453 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-08 00:41:29.414462 | orchestrator | Monday 08 September 2025 00:41:28 +0000 (0:00:00.153) 0:00:14.095 ******
2025-09-08 00:41:29.414497 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})
2025-09-08 00:41:29.414507 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})
2025-09-08 00:41:29.414517 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:29.414527 | orchestrator |
2025-09-08 00:41:29.414536 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-08 00:41:29.414546 | orchestrator | Monday 08 September 2025 00:41:28 +0000 (0:00:00.162) 0:00:14.257 ******
2025-09-08 00:41:29.414556 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})
2025-09-08 00:41:29.414565 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})  2025-09-08 00:41:29.414575 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:29.414584 | orchestrator | 2025-09-08 00:41:29.414594 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-08 00:41:29.414604 | orchestrator | Monday 08 September 2025 00:41:28 +0000 (0:00:00.181) 0:00:14.439 ****** 2025-09-08 00:41:29.414613 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})  2025-09-08 00:41:29.414623 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})  2025-09-08 00:41:29.414633 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:29.414642 | orchestrator | 2025-09-08 00:41:29.414652 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-08 00:41:29.414661 | orchestrator | Monday 08 September 2025 00:41:29 +0000 (0:00:00.176) 0:00:14.616 ****** 2025-09-08 00:41:29.414671 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:29.414687 | orchestrator | 2025-09-08 00:41:29.414696 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-08 00:41:29.414706 | orchestrator | Monday 08 September 2025 00:41:29 +0000 (0:00:00.126) 0:00:14.743 ****** 2025-09-08 00:41:29.414716 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:29.414744 | orchestrator | 2025-09-08 00:41:29.414760 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-08 00:41:36.598357 | orchestrator | Monday 08 September 2025 00:41:29 +0000 (0:00:00.137) 
0:00:14.880 ****** 2025-09-08 00:41:36.598470 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:36.598487 | orchestrator | 2025-09-08 00:41:36.598499 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-08 00:41:36.598511 | orchestrator | Monday 08 September 2025 00:41:29 +0000 (0:00:00.165) 0:00:15.045 ****** 2025-09-08 00:41:36.598522 | orchestrator | ok: [testbed-node-3] => { 2025-09-08 00:41:36.598533 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-08 00:41:36.598545 | orchestrator | } 2025-09-08 00:41:36.598556 | orchestrator | 2025-09-08 00:41:36.598567 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-08 00:41:36.598578 | orchestrator | Monday 08 September 2025 00:41:29 +0000 (0:00:00.413) 0:00:15.459 ****** 2025-09-08 00:41:36.598589 | orchestrator | ok: [testbed-node-3] => { 2025-09-08 00:41:36.598600 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-08 00:41:36.598610 | orchestrator | } 2025-09-08 00:41:36.598622 | orchestrator | 2025-09-08 00:41:36.598632 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-08 00:41:36.598643 | orchestrator | Monday 08 September 2025 00:41:30 +0000 (0:00:00.177) 0:00:15.636 ****** 2025-09-08 00:41:36.598654 | orchestrator | ok: [testbed-node-3] => { 2025-09-08 00:41:36.598665 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-08 00:41:36.598676 | orchestrator | } 2025-09-08 00:41:36.598688 | orchestrator | 2025-09-08 00:41:36.598699 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-08 00:41:36.598710 | orchestrator | Monday 08 September 2025 00:41:30 +0000 (0:00:00.180) 0:00:15.817 ****** 2025-09-08 00:41:36.598781 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:41:36.598793 | orchestrator | 2025-09-08 00:41:36.598804 | orchestrator | TASK [Gather WAL VGs with 
total and available size in bytes] ******************* 2025-09-08 00:41:36.598815 | orchestrator | Monday 08 September 2025 00:41:31 +0000 (0:00:00.700) 0:00:16.517 ****** 2025-09-08 00:41:36.598826 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:41:36.598837 | orchestrator | 2025-09-08 00:41:36.598848 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-08 00:41:36.598859 | orchestrator | Monday 08 September 2025 00:41:31 +0000 (0:00:00.536) 0:00:17.053 ****** 2025-09-08 00:41:36.598870 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:41:36.598882 | orchestrator | 2025-09-08 00:41:36.598896 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-08 00:41:36.598908 | orchestrator | Monday 08 September 2025 00:41:32 +0000 (0:00:00.580) 0:00:17.633 ****** 2025-09-08 00:41:36.598920 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:41:36.598932 | orchestrator | 2025-09-08 00:41:36.598945 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-08 00:41:36.598957 | orchestrator | Monday 08 September 2025 00:41:32 +0000 (0:00:00.167) 0:00:17.801 ****** 2025-09-08 00:41:36.598970 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:36.598982 | orchestrator | 2025-09-08 00:41:36.598994 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-08 00:41:36.599006 | orchestrator | Monday 08 September 2025 00:41:32 +0000 (0:00:00.144) 0:00:17.945 ****** 2025-09-08 00:41:36.599018 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:36.599030 | orchestrator | 2025-09-08 00:41:36.599043 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-08 00:41:36.599056 | orchestrator | Monday 08 September 2025 00:41:32 +0000 (0:00:00.121) 0:00:18.067 ****** 2025-09-08 00:41:36.599068 | orchestrator | ok: 
[testbed-node-3] => { 2025-09-08 00:41:36.599104 | orchestrator |  "vgs_report": { 2025-09-08 00:41:36.599134 | orchestrator |  "vg": [] 2025-09-08 00:41:36.599146 | orchestrator |  } 2025-09-08 00:41:36.599160 | orchestrator | } 2025-09-08 00:41:36.599173 | orchestrator | 2025-09-08 00:41:36.599185 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-08 00:41:36.599197 | orchestrator | Monday 08 September 2025 00:41:32 +0000 (0:00:00.146) 0:00:18.214 ****** 2025-09-08 00:41:36.599209 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:36.599222 | orchestrator | 2025-09-08 00:41:36.599235 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-08 00:41:36.599246 | orchestrator | Monday 08 September 2025 00:41:32 +0000 (0:00:00.166) 0:00:18.380 ****** 2025-09-08 00:41:36.599257 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:36.599267 | orchestrator | 2025-09-08 00:41:36.599278 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-08 00:41:36.599289 | orchestrator | Monday 08 September 2025 00:41:33 +0000 (0:00:00.165) 0:00:18.546 ****** 2025-09-08 00:41:36.599299 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:36.599310 | orchestrator | 2025-09-08 00:41:36.599321 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-08 00:41:36.599331 | orchestrator | Monday 08 September 2025 00:41:33 +0000 (0:00:00.346) 0:00:18.892 ****** 2025-09-08 00:41:36.599342 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:36.599353 | orchestrator | 2025-09-08 00:41:36.599363 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-08 00:41:36.599374 | orchestrator | Monday 08 September 2025 00:41:33 +0000 (0:00:00.196) 0:00:19.089 ****** 2025-09-08 00:41:36.599384 | orchestrator | skipping: 
[testbed-node-3] 2025-09-08 00:41:36.599395 | orchestrator | 2025-09-08 00:41:36.599406 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-08 00:41:36.599416 | orchestrator | Monday 08 September 2025 00:41:33 +0000 (0:00:00.165) 0:00:19.255 ****** 2025-09-08 00:41:36.599427 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:36.599438 | orchestrator | 2025-09-08 00:41:36.599448 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-08 00:41:36.599459 | orchestrator | Monday 08 September 2025 00:41:33 +0000 (0:00:00.112) 0:00:19.367 ****** 2025-09-08 00:41:36.599469 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:36.599480 | orchestrator | 2025-09-08 00:41:36.599491 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-08 00:41:36.599501 | orchestrator | Monday 08 September 2025 00:41:34 +0000 (0:00:00.182) 0:00:19.550 ****** 2025-09-08 00:41:36.599512 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:36.599522 | orchestrator | 2025-09-08 00:41:36.599533 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-08 00:41:36.599564 | orchestrator | Monday 08 September 2025 00:41:34 +0000 (0:00:00.190) 0:00:19.740 ****** 2025-09-08 00:41:36.599575 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:36.599586 | orchestrator | 2025-09-08 00:41:36.599597 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-08 00:41:36.599607 | orchestrator | Monday 08 September 2025 00:41:34 +0000 (0:00:00.171) 0:00:19.912 ****** 2025-09-08 00:41:36.599618 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:36.599628 | orchestrator | 2025-09-08 00:41:36.599639 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-08 00:41:36.599649 | 
orchestrator | Monday 08 September 2025 00:41:34 +0000 (0:00:00.162) 0:00:20.074 ****** 2025-09-08 00:41:36.599660 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:36.599670 | orchestrator | 2025-09-08 00:41:36.599681 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-08 00:41:36.599692 | orchestrator | Monday 08 September 2025 00:41:34 +0000 (0:00:00.186) 0:00:20.261 ****** 2025-09-08 00:41:36.599702 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:36.599736 | orchestrator | 2025-09-08 00:41:36.599758 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-08 00:41:36.599769 | orchestrator | Monday 08 September 2025 00:41:34 +0000 (0:00:00.147) 0:00:20.408 ****** 2025-09-08 00:41:36.599780 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:36.599791 | orchestrator | 2025-09-08 00:41:36.599801 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-08 00:41:36.599812 | orchestrator | Monday 08 September 2025 00:41:35 +0000 (0:00:00.147) 0:00:20.555 ****** 2025-09-08 00:41:36.599823 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:36.599833 | orchestrator | 2025-09-08 00:41:36.599844 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-08 00:41:36.599855 | orchestrator | Monday 08 September 2025 00:41:35 +0000 (0:00:00.163) 0:00:20.718 ****** 2025-09-08 00:41:36.599867 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})  2025-09-08 00:41:36.599879 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})  2025-09-08 00:41:36.599890 | orchestrator | skipping: [testbed-node-3] 2025-09-08 
00:41:36.599901 | orchestrator | 2025-09-08 00:41:36.599911 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-08 00:41:36.599922 | orchestrator | Monday 08 September 2025 00:41:35 +0000 (0:00:00.410) 0:00:21.129 ****** 2025-09-08 00:41:36.599933 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})  2025-09-08 00:41:36.599944 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})  2025-09-08 00:41:36.599954 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:36.599965 | orchestrator | 2025-09-08 00:41:36.599976 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-08 00:41:36.599986 | orchestrator | Monday 08 September 2025 00:41:35 +0000 (0:00:00.204) 0:00:21.334 ****** 2025-09-08 00:41:36.599997 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})  2025-09-08 00:41:36.600008 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})  2025-09-08 00:41:36.600019 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:36.600029 | orchestrator | 2025-09-08 00:41:36.600040 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-08 00:41:36.600050 | orchestrator | Monday 08 September 2025 00:41:36 +0000 (0:00:00.187) 0:00:21.521 ****** 2025-09-08 00:41:36.600061 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})  2025-09-08 
00:41:36.600072 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})  2025-09-08 00:41:36.600083 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:36.600093 | orchestrator | 2025-09-08 00:41:36.600104 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-08 00:41:36.600115 | orchestrator | Monday 08 September 2025 00:41:36 +0000 (0:00:00.164) 0:00:21.686 ****** 2025-09-08 00:41:36.600125 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})  2025-09-08 00:41:36.600136 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})  2025-09-08 00:41:36.600147 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:36.600164 | orchestrator | 2025-09-08 00:41:36.600175 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-08 00:41:36.600186 | orchestrator | Monday 08 September 2025 00:41:36 +0000 (0:00:00.184) 0:00:21.871 ****** 2025-09-08 00:41:36.600204 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})  2025-09-08 00:41:36.600223 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})  2025-09-08 00:41:42.539277 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:42.539399 | orchestrator | 2025-09-08 00:41:42.539417 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-08 00:41:42.539431 | orchestrator | Monday 08 September 2025 
00:41:36 +0000 (0:00:00.199) 0:00:22.070 ****** 2025-09-08 00:41:42.539443 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})  2025-09-08 00:41:42.539457 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})  2025-09-08 00:41:42.539468 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:42.539479 | orchestrator | 2025-09-08 00:41:42.539490 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-08 00:41:42.539501 | orchestrator | Monday 08 September 2025 00:41:36 +0000 (0:00:00.188) 0:00:22.258 ****** 2025-09-08 00:41:42.539512 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})  2025-09-08 00:41:42.539523 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})  2025-09-08 00:41:42.539533 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:42.539545 | orchestrator | 2025-09-08 00:41:42.539556 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-08 00:41:42.539567 | orchestrator | Monday 08 September 2025 00:41:36 +0000 (0:00:00.158) 0:00:22.417 ****** 2025-09-08 00:41:42.539578 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:41:42.539589 | orchestrator | 2025-09-08 00:41:42.539600 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-08 00:41:42.539611 | orchestrator | Monday 08 September 2025 00:41:37 +0000 (0:00:00.528) 0:00:22.946 ****** 2025-09-08 00:41:42.539622 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:41:42.539632 | 
orchestrator | 2025-09-08 00:41:42.539643 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-08 00:41:42.539654 | orchestrator | Monday 08 September 2025 00:41:38 +0000 (0:00:00.555) 0:00:23.501 ****** 2025-09-08 00:41:42.539665 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:41:42.539675 | orchestrator | 2025-09-08 00:41:42.539686 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-08 00:41:42.539697 | orchestrator | Monday 08 September 2025 00:41:38 +0000 (0:00:00.193) 0:00:23.695 ****** 2025-09-08 00:41:42.539741 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'vg_name': 'ceph-6b18b724-0587-5812-9148-41071cea985b'}) 2025-09-08 00:41:42.539754 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'vg_name': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'}) 2025-09-08 00:41:42.539766 | orchestrator | 2025-09-08 00:41:42.539793 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-08 00:41:42.539807 | orchestrator | Monday 08 September 2025 00:41:38 +0000 (0:00:00.178) 0:00:23.874 ****** 2025-09-08 00:41:42.539820 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})  2025-09-08 00:41:42.539859 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})  2025-09-08 00:41:42.539872 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:42.539884 | orchestrator | 2025-09-08 00:41:42.539898 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-08 00:41:42.539910 | orchestrator | Monday 08 September 2025 00:41:38 +0000 
(0:00:00.401) 0:00:24.275 ****** 2025-09-08 00:41:42.539923 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})  2025-09-08 00:41:42.539936 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})  2025-09-08 00:41:42.539948 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:42.539961 | orchestrator | 2025-09-08 00:41:42.539973 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-08 00:41:42.539986 | orchestrator | Monday 08 September 2025 00:41:38 +0000 (0:00:00.164) 0:00:24.439 ****** 2025-09-08 00:41:42.539999 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'})  2025-09-08 00:41:42.540012 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'})  2025-09-08 00:41:42.540024 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:42.540036 | orchestrator | 2025-09-08 00:41:42.540050 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-08 00:41:42.540062 | orchestrator | Monday 08 September 2025 00:41:39 +0000 (0:00:00.171) 0:00:24.611 ****** 2025-09-08 00:41:42.540074 | orchestrator | ok: [testbed-node-3] => { 2025-09-08 00:41:42.540087 | orchestrator |  "lvm_report": { 2025-09-08 00:41:42.540100 | orchestrator |  "lv": [ 2025-09-08 00:41:42.540113 | orchestrator |  { 2025-09-08 00:41:42.540142 | orchestrator |  "lv_name": "osd-block-6b18b724-0587-5812-9148-41071cea985b", 2025-09-08 00:41:42.540157 | orchestrator |  "vg_name": "ceph-6b18b724-0587-5812-9148-41071cea985b" 2025-09-08 00:41:42.540169 
| orchestrator |  }, 2025-09-08 00:41:42.540180 | orchestrator |  { 2025-09-08 00:41:42.540191 | orchestrator |  "lv_name": "osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b", 2025-09-08 00:41:42.540202 | orchestrator |  "vg_name": "ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b" 2025-09-08 00:41:42.540212 | orchestrator |  } 2025-09-08 00:41:42.540223 | orchestrator |  ], 2025-09-08 00:41:42.540234 | orchestrator |  "pv": [ 2025-09-08 00:41:42.540244 | orchestrator |  { 2025-09-08 00:41:42.540255 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-08 00:41:42.540266 | orchestrator |  "vg_name": "ceph-6b18b724-0587-5812-9148-41071cea985b" 2025-09-08 00:41:42.540276 | orchestrator |  }, 2025-09-08 00:41:42.540287 | orchestrator |  { 2025-09-08 00:41:42.540297 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-08 00:41:42.540308 | orchestrator |  "vg_name": "ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b" 2025-09-08 00:41:42.540318 | orchestrator |  } 2025-09-08 00:41:42.540329 | orchestrator |  ] 2025-09-08 00:41:42.540340 | orchestrator |  } 2025-09-08 00:41:42.540351 | orchestrator | } 2025-09-08 00:41:42.540362 | orchestrator | 2025-09-08 00:41:42.540372 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-08 00:41:42.540383 | orchestrator | 2025-09-08 00:41:42.540393 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-08 00:41:42.540404 | orchestrator | Monday 08 September 2025 00:41:39 +0000 (0:00:00.362) 0:00:24.973 ****** 2025-09-08 00:41:42.540415 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-08 00:41:42.540434 | orchestrator | 2025-09-08 00:41:42.540445 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-08 00:41:42.540456 | orchestrator | Monday 08 September 2025 00:41:39 +0000 (0:00:00.317) 0:00:25.291 ****** 2025-09-08 00:41:42.540466 | orchestrator | ok: [testbed-node-4] 
2025-09-08 00:41:42.540477 | orchestrator | 2025-09-08 00:41:42.540488 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:42.540498 | orchestrator | Monday 08 September 2025 00:41:40 +0000 (0:00:00.254) 0:00:25.545 ****** 2025-09-08 00:41:42.540509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-08 00:41:42.540520 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-08 00:41:42.540531 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-08 00:41:42.540544 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-08 00:41:42.540564 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-08 00:41:42.540584 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-08 00:41:42.540604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-08 00:41:42.540630 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-08 00:41:42.540650 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-08 00:41:42.540670 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-08 00:41:42.540690 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-08 00:41:42.540733 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-08 00:41:42.540755 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-08 00:41:42.540773 | orchestrator | 2025-09-08 00:41:42.540788 | orchestrator 
| TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:42.540799 | orchestrator | Monday 08 September 2025 00:41:40 +0000 (0:00:00.423) 0:00:25.969 ****** 2025-09-08 00:41:42.540810 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:41:42.540821 | orchestrator | 2025-09-08 00:41:42.540831 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:42.540842 | orchestrator | Monday 08 September 2025 00:41:40 +0000 (0:00:00.221) 0:00:26.191 ****** 2025-09-08 00:41:42.540852 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:41:42.540863 | orchestrator | 2025-09-08 00:41:42.540874 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:42.540884 | orchestrator | Monday 08 September 2025 00:41:40 +0000 (0:00:00.202) 0:00:26.394 ****** 2025-09-08 00:41:42.540895 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:41:42.540905 | orchestrator | 2025-09-08 00:41:42.540916 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:42.540927 | orchestrator | Monday 08 September 2025 00:41:41 +0000 (0:00:00.749) 0:00:27.143 ****** 2025-09-08 00:41:42.540937 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:41:42.540947 | orchestrator | 2025-09-08 00:41:42.540958 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:42.540969 | orchestrator | Monday 08 September 2025 00:41:41 +0000 (0:00:00.255) 0:00:27.398 ****** 2025-09-08 00:41:42.540979 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:41:42.540990 | orchestrator | 2025-09-08 00:41:42.541001 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:42.541011 | orchestrator | Monday 08 September 2025 00:41:42 +0000 (0:00:00.186) 0:00:27.585 ****** 2025-09-08 
00:41:42.541021 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:42.541032 | orchestrator | 
2025-09-08 00:41:42.541052 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:42.541063 | orchestrator | Monday 08 September 2025  00:41:42 +0000 (0:00:00.212)       0:00:27.797 ******
2025-09-08 00:41:42.541074 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:42.541085 | orchestrator | 
2025-09-08 00:41:42.541105 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:53.206564 | orchestrator | Monday 08 September 2025  00:41:42 +0000 (0:00:00.213)       0:00:28.011 ******
2025-09-08 00:41:53.207673 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:53.207749 | orchestrator | 
2025-09-08 00:41:53.207764 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:53.207776 | orchestrator | Monday 08 September 2025  00:41:42 +0000 (0:00:00.218)       0:00:28.229 ******
2025-09-08 00:41:53.207788 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3)
2025-09-08 00:41:53.207800 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3)
2025-09-08 00:41:53.207810 | orchestrator | 
2025-09-08 00:41:53.207821 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:53.207832 | orchestrator | Monday 08 September 2025  00:41:43 +0000 (0:00:00.601)       0:00:28.831 ******
2025-09-08 00:41:53.207843 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4b92dc1e-8c5d-4e7b-ac22-fcae021763ab)
2025-09-08 00:41:53.207853 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4b92dc1e-8c5d-4e7b-ac22-fcae021763ab)
2025-09-08 00:41:53.207864 | orchestrator | 
2025-09-08 00:41:53.207875 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:53.207885 | orchestrator | Monday 08 September 2025  00:41:43 +0000 (0:00:00.454)       0:00:29.286 ******
2025-09-08 00:41:53.207896 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_59c5476b-d42d-4c70-8df0-eefae278ca55)
2025-09-08 00:41:53.207907 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_59c5476b-d42d-4c70-8df0-eefae278ca55)
2025-09-08 00:41:53.207917 | orchestrator | 
2025-09-08 00:41:53.207928 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:53.207939 | orchestrator | Monday 08 September 2025  00:41:44 +0000 (0:00:00.419)       0:00:29.706 ******
2025-09-08 00:41:53.207949 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d4ba40c0-17ae-4bff-a3cd-012c30b3474e)
2025-09-08 00:41:53.207960 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d4ba40c0-17ae-4bff-a3cd-012c30b3474e)
2025-09-08 00:41:53.207971 | orchestrator | 
2025-09-08 00:41:53.207981 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:53.207992 | orchestrator | Monday 08 September 2025  00:41:44 +0000 (0:00:00.431)       0:00:30.138 ******
2025-09-08 00:41:53.208003 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-08 00:41:53.208013 | orchestrator | 
2025-09-08 00:41:53.208024 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:53.208035 | orchestrator | Monday 08 September 2025  00:41:44 +0000 (0:00:00.325)       0:00:30.463 ******
2025-09-08 00:41:53.208045 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-09-08 00:41:53.208057 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-09-08 00:41:53.208068 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-09-08 00:41:53.208079 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-09-08 00:41:53.208089 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-09-08 00:41:53.208100 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-09-08 00:41:53.208132 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-09-08 00:41:53.208166 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-09-08 00:41:53.208177 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-09-08 00:41:53.208188 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-09-08 00:41:53.208199 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-09-08 00:41:53.208209 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-09-08 00:41:53.208220 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-09-08 00:41:53.208230 | orchestrator | 
2025-09-08 00:41:53.208241 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:53.208251 | orchestrator | Monday 08 September 2025  00:41:45 +0000 (0:00:00.653)       0:00:31.116 ******
2025-09-08 00:41:53.208262 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:53.208273 | orchestrator | 
2025-09-08 00:41:53.208283 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:53.208294 | orchestrator | Monday 08 September 2025  00:41:45 +0000 (0:00:00.220)       0:00:31.337 ******
2025-09-08 00:41:53.208304 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:53.208315 | orchestrator | 
2025-09-08 00:41:53.208326 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:53.208337 | orchestrator | Monday 08 September 2025  00:41:46 +0000 (0:00:00.220)       0:00:31.557 ******
2025-09-08 00:41:53.208347 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:53.208358 | orchestrator | 
2025-09-08 00:41:53.208368 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:53.208379 | orchestrator | Monday 08 September 2025  00:41:46 +0000 (0:00:00.251)       0:00:31.808 ******
2025-09-08 00:41:53.208389 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:53.208400 | orchestrator | 
2025-09-08 00:41:53.208431 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:53.208443 | orchestrator | Monday 08 September 2025  00:41:46 +0000 (0:00:00.222)       0:00:32.031 ******
2025-09-08 00:41:53.208454 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:53.208465 | orchestrator | 
2025-09-08 00:41:53.208475 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:53.208486 | orchestrator | Monday 08 September 2025  00:41:46 +0000 (0:00:00.220)       0:00:32.252 ******
2025-09-08 00:41:53.208496 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:53.208507 | orchestrator | 
2025-09-08 00:41:53.208517 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:53.208528 | orchestrator | Monday 08 September 2025  00:41:47 +0000 (0:00:00.248)       0:00:32.501 ******
2025-09-08 00:41:53.208538 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:53.208549 | orchestrator | 
2025-09-08 00:41:53.208559 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:53.208570 | orchestrator | Monday 08 September 2025  00:41:47 +0000 (0:00:00.201)       0:00:32.702 ******
2025-09-08 00:41:53.208580 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:53.208591 | orchestrator | 
2025-09-08 00:41:53.208601 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:53.208612 | orchestrator | Monday 08 September 2025  00:41:47 +0000 (0:00:00.205)       0:00:32.908 ******
2025-09-08 00:41:53.208623 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-09-08 00:41:53.208633 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-09-08 00:41:53.208644 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-09-08 00:41:53.208655 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-09-08 00:41:53.208665 | orchestrator | 
2025-09-08 00:41:53.208677 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:53.208687 | orchestrator | Monday 08 September 2025  00:41:48 +0000 (0:00:00.876)       0:00:33.785 ******
2025-09-08 00:41:53.208728 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:53.208749 | orchestrator | 
2025-09-08 00:41:53.208767 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:53.208785 | orchestrator | Monday 08 September 2025  00:41:48 +0000 (0:00:00.212)       0:00:33.998 ******
2025-09-08 00:41:53.208804 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:53.208824 | orchestrator | 
2025-09-08 00:41:53.208845 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:53.208919 | orchestrator | Monday 08 September 2025  00:41:48 +0000 (0:00:00.210)       0:00:34.208 ******
2025-09-08 00:41:53.208934 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:53.208945 | orchestrator | 
2025-09-08 00:41:53.208963 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:53.208980 | orchestrator | Monday 08 September 2025  00:41:49 +0000 (0:00:00.641)       0:00:34.850 ******
2025-09-08 00:41:53.208998 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:53.209016 | orchestrator | 
2025-09-08 00:41:53.209034 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-08 00:41:53.209051 | orchestrator | Monday 08 September 2025  00:41:49 +0000 (0:00:00.236)       0:00:35.086 ******
2025-09-08 00:41:53.209079 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:53.209141 | orchestrator | 
2025-09-08 00:41:53.209156 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-08 00:41:53.209166 | orchestrator | Monday 08 September 2025  00:41:49 +0000 (0:00:00.145)       0:00:35.232 ******
2025-09-08 00:41:53.209177 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'}})
2025-09-08 00:41:53.209189 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aa077d44-869a-533b-aa21-81dea0f926a7'}})
2025-09-08 00:41:53.209199 | orchestrator | 
2025-09-08 00:41:53.209210 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-08 00:41:53.209220 | orchestrator | Monday 08 September 2025  00:41:49 +0000 (0:00:00.214)       0:00:35.447 ******
2025-09-08 00:41:53.209232 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:41:53.209244 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:41:53.209254 | orchestrator | 
2025-09-08 00:41:53.209265 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-08 00:41:53.209276 | orchestrator | Monday 08 September 2025  00:41:51 +0000 (0:00:01.787)       0:00:37.234 ******
2025-09-08 00:41:53.209286 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:41:53.209298 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:41:53.209309 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:53.209320 | orchestrator | 
2025-09-08 00:41:53.209330 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-08 00:41:53.209341 | orchestrator | Monday 08 September 2025  00:41:51 +0000 (0:00:00.168)       0:00:37.403 ******
2025-09-08 00:41:53.209351 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:41:53.209362 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:41:53.209373 | orchestrator | 
2025-09-08 00:41:53.209396 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-08 00:41:58.972080 | orchestrator | Monday 08 September 2025  00:41:53 +0000 (0:00:01.270)       0:00:38.674 ******
2025-09-08 00:41:58.972221 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:41:58.972239 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:41:58.972252 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:58.972264 | orchestrator | 
2025-09-08 00:41:58.972276 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-08 00:41:58.972287 | orchestrator | Monday 08 September 2025  00:41:53 +0000 (0:00:00.146)       0:00:38.820 ******
2025-09-08 00:41:58.972298 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:58.972309 | orchestrator | 
2025-09-08 00:41:58.972320 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-08 00:41:58.972331 | orchestrator | Monday 08 September 2025  00:41:53 +0000 (0:00:00.144)       0:00:38.965 ******
2025-09-08 00:41:58.972343 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:41:58.972354 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:41:58.972364 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:58.972375 | orchestrator | 
2025-09-08 00:41:58.972386 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-08 00:41:58.972397 | orchestrator | Monday 08 September 2025  00:41:53 +0000 (0:00:00.182)       0:00:39.148 ******
2025-09-08 00:41:58.972408 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:58.972418 | orchestrator | 
2025-09-08 00:41:58.972429 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-08 00:41:58.972440 | orchestrator | Monday 08 September 2025  00:41:53 +0000 (0:00:00.140)       0:00:39.289 ******
2025-09-08 00:41:58.972451 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:41:58.972462 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:41:58.972472 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:58.972483 | orchestrator | 
2025-09-08 00:41:58.972494 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-08 00:41:58.972505 | orchestrator | Monday 08 September 2025  00:41:53 +0000 (0:00:00.156)       0:00:39.445 ******
2025-09-08 00:41:58.972532 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:58.972543 | orchestrator | 
2025-09-08 00:41:58.972554 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-08 00:41:58.972565 | orchestrator | Monday 08 September 2025  00:41:54 +0000 (0:00:00.364)       0:00:39.809 ******
2025-09-08 00:41:58.972575 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:41:58.972586 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:41:58.972597 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:58.972608 | orchestrator | 
2025-09-08 00:41:58.972619 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-08 00:41:58.972629 | orchestrator | Monday 08 September 2025  00:41:54 +0000 (0:00:00.149)       0:00:39.974 ******
2025-09-08 00:41:58.972640 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:41:58.972651 | orchestrator | 
2025-09-08 00:41:58.972662 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-08 00:41:58.972673 | orchestrator | Monday 08 September 2025  00:41:54 +0000 (0:00:00.149)       0:00:40.123 ******
2025-09-08 00:41:58.972721 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:41:58.972734 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:41:58.972745 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:58.972756 | orchestrator | 
2025-09-08 00:41:58.972767 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-08 00:41:58.972777 | orchestrator | Monday 08 September 2025  00:41:54 +0000 (0:00:00.164)       0:00:40.288 ******
2025-09-08 00:41:58.972788 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:41:58.972799 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:41:58.972810 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:58.972820 | orchestrator | 
2025-09-08 00:41:58.972831 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-08 00:41:58.972842 | orchestrator | Monday 08 September 2025  00:41:54 +0000 (0:00:00.181)       0:00:40.470 ******
2025-09-08 00:41:58.972871 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:41:58.972883 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:41:58.972894 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:58.972905 | orchestrator | 
2025-09-08 00:41:58.972916 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-08 00:41:58.972927 | orchestrator | Monday 08 September 2025  00:41:55 +0000 (0:00:00.160)       0:00:40.630 ******
2025-09-08 00:41:58.972937 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:58.972948 | orchestrator | 
2025-09-08 00:41:58.972959 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-08 00:41:58.972970 | orchestrator | Monday 08 September 2025  00:41:55 +0000 (0:00:00.142)       0:00:40.772 ******
2025-09-08 00:41:58.972980 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:58.972991 | orchestrator | 
2025-09-08 00:41:58.973002 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-08 00:41:58.973013 | orchestrator | Monday 08 September 2025  00:41:55 +0000 (0:00:00.144)       0:00:40.916 ******
2025-09-08 00:41:58.973023 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:58.973034 | orchestrator | 
2025-09-08 00:41:58.973045 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-08 00:41:58.973056 | orchestrator | Monday 08 September 2025  00:41:55 +0000 (0:00:00.145)       0:00:41.062 ******
2025-09-08 00:41:58.973066 | orchestrator | ok: [testbed-node-4] => {
2025-09-08 00:41:58.973077 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-09-08 00:41:58.973088 | orchestrator | }
2025-09-08 00:41:58.973099 | orchestrator | 
2025-09-08 00:41:58.973110 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-08 00:41:58.973121 | orchestrator | Monday 08 September 2025  00:41:55 +0000 (0:00:00.143)       0:00:41.206 ******
2025-09-08 00:41:58.973132 | orchestrator | ok: [testbed-node-4] => {
2025-09-08 00:41:58.973142 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-09-08 00:41:58.973153 | orchestrator | }
2025-09-08 00:41:58.973164 | orchestrator | 
2025-09-08 00:41:58.973175 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-08 00:41:58.973185 | orchestrator | Monday 08 September 2025  00:41:55 +0000 (0:00:00.156)       0:00:41.362 ******
2025-09-08 00:41:58.973196 | orchestrator | ok: [testbed-node-4] => {
2025-09-08 00:41:58.973207 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-09-08 00:41:58.973226 | orchestrator | }
2025-09-08 00:41:58.973237 | orchestrator | 
2025-09-08 00:41:58.973248 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-08 00:41:58.973259 | orchestrator | Monday 08 September 2025  00:41:56 +0000 (0:00:00.167)       0:00:41.529 ******
2025-09-08 00:41:58.973270 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:41:58.973280 | orchestrator | 
2025-09-08 00:41:58.973291 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-08 00:41:58.973302 | orchestrator | Monday 08 September 2025  00:41:56 +0000 (0:00:00.752)       0:00:42.282 ******
2025-09-08 00:41:58.973313 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:41:58.973324 | orchestrator | 
2025-09-08 00:41:58.973335 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-08 00:41:58.973346 | orchestrator | Monday 08 September 2025  00:41:57 +0000 (0:00:00.516)       0:00:42.798 ******
2025-09-08 00:41:58.973357 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:41:58.973368 | orchestrator | 
2025-09-08 00:41:58.973379 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-08 00:41:58.973389 | orchestrator | Monday 08 September 2025  00:41:57 +0000 (0:00:00.503)       0:00:43.301 ******
2025-09-08 00:41:58.973400 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:41:58.973411 | orchestrator | 
2025-09-08 00:41:58.973422 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-08 00:41:58.973432 | orchestrator | Monday 08 September 2025  00:41:57 +0000 (0:00:00.166)       0:00:43.468 ******
2025-09-08 00:41:58.973443 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:58.973454 | orchestrator | 
2025-09-08 00:41:58.973465 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-08 00:41:58.973476 | orchestrator | Monday 08 September 2025  00:41:58 +0000 (0:00:00.123)       0:00:43.592 ******
2025-09-08 00:41:58.973494 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:58.973505 | orchestrator | 
2025-09-08 00:41:58.973516 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-08 00:41:58.973527 | orchestrator | Monday 08 September 2025  00:41:58 +0000 (0:00:00.126)       0:00:43.718 ******
2025-09-08 00:41:58.973537 | orchestrator | ok: [testbed-node-4] => {
2025-09-08 00:41:58.973548 | orchestrator |     "vgs_report": {
2025-09-08 00:41:58.973560 | orchestrator |         "vg": []
2025-09-08 00:41:58.973571 | orchestrator |     }
2025-09-08 00:41:58.973582 | orchestrator | }
2025-09-08 00:41:58.973593 | orchestrator | 
2025-09-08 00:41:58.973604 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-08 00:41:58.973615 | orchestrator | Monday 08 September 2025  00:41:58 +0000 (0:00:00.155)       0:00:43.874 ******
2025-09-08 00:41:58.973626 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:58.973637 | orchestrator | 
2025-09-08 00:41:58.973647 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-08 00:41:58.973658 | orchestrator | Monday 08 September 2025  00:41:58 +0000 (0:00:00.144)       0:00:44.019 ******
2025-09-08 00:41:58.973669 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:58.973680 | orchestrator | 
2025-09-08 00:41:58.973691 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-08 00:41:58.973719 | orchestrator | Monday 08 September 2025  00:41:58 +0000 (0:00:00.142)       0:00:44.161 ******
2025-09-08 00:41:58.973730 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:58.973740 | orchestrator | 
2025-09-08 00:41:58.973751 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-08 00:41:58.973762 | orchestrator | Monday 08 September 2025  00:41:58 +0000 (0:00:00.133)       0:00:44.295 ******
2025-09-08 00:41:58.973773 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:58.973783 | orchestrator | 
2025-09-08 00:41:58.973794 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-08 00:41:58.973811 | orchestrator | Monday 08 September 2025  00:41:58 +0000 (0:00:00.146)       0:00:44.442 ******
2025-09-08 00:42:03.993065 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:03.993177 | orchestrator | 
2025-09-08 00:42:03.993218 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-08 00:42:03.993231 | orchestrator | Monday 08 September 2025  00:41:59 +0000 (0:00:00.138)       0:00:44.580 ******
2025-09-08 00:42:03.993242 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:03.993253 | orchestrator | 
2025-09-08 00:42:03.993264 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-08 00:42:03.993275 | orchestrator | Monday 08 September 2025  00:41:59 +0000 (0:00:00.345)       0:00:44.926 ******
2025-09-08 00:42:03.993286 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:03.993296 | orchestrator | 
2025-09-08 00:42:03.993307 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-08 00:42:03.993318 | orchestrator | Monday 08 September 2025  00:41:59 +0000 (0:00:00.178)       0:00:45.105 ******
2025-09-08 00:42:03.993328 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:03.993339 | orchestrator | 
2025-09-08 00:42:03.993349 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-08 00:42:03.993360 | orchestrator | Monday 08 September 2025  00:41:59 +0000 (0:00:00.157)       0:00:45.262 ******
2025-09-08 00:42:03.993371 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:03.993381 | orchestrator | 
2025-09-08 00:42:03.993392 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-08 00:42:03.993402 | orchestrator | Monday 08 September 2025  00:41:59 +0000 (0:00:00.141)       0:00:45.404 ******
2025-09-08 00:42:03.993413 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:03.993423 | orchestrator | 
2025-09-08 00:42:03.993434 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-08 00:42:03.993444 | orchestrator | Monday 08 September 2025  00:42:00 +0000 (0:00:00.186)       0:00:45.591 ******
2025-09-08 00:42:03.993455 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:03.993466 | orchestrator | 
2025-09-08 00:42:03.993476 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-08 00:42:03.993487 | orchestrator | Monday 08 September 2025  00:42:00 +0000 (0:00:00.133)       0:00:45.725 ******
2025-09-08 00:42:03.993497 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:03.993508 | orchestrator | 
2025-09-08 00:42:03.993518 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-08 00:42:03.993529 | orchestrator | Monday 08 September 2025  00:42:00 +0000 (0:00:00.147)       0:00:45.872 ******
2025-09-08 00:42:03.993539 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:03.993550 | orchestrator | 
2025-09-08 00:42:03.993561 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-08 00:42:03.993571 | orchestrator | Monday 08 September 2025  00:42:00 +0000 (0:00:00.133)       0:00:46.005 ******
2025-09-08 00:42:03.993582 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:03.993596 | orchestrator | 
2025-09-08 00:42:03.993608 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-08 00:42:03.993621 | orchestrator | Monday 08 September 2025  00:42:00 +0000 (0:00:00.139)       0:00:46.145 ******
2025-09-08 00:42:03.993648 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:42:03.993663 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:42:03.993676 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:03.993707 | orchestrator | 
2025-09-08 00:42:03.993721 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-08 00:42:03.993734 | orchestrator | Monday 08 September 2025  00:42:00 +0000 (0:00:00.149)       0:00:46.294 ******
2025-09-08 00:42:03.993747 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:42:03.993759 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:42:03.993784 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:03.993797 | orchestrator | 
2025-09-08 00:42:03.993809 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-08 00:42:03.993822 | orchestrator | Monday 08 September 2025  00:42:00 +0000 (0:00:00.162)       0:00:46.457 ******
2025-09-08 00:42:03.993834 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:42:03.993846 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:42:03.993859 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:03.993872 | orchestrator | 
2025-09-08 00:42:03.993885 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-08 00:42:03.993898 | orchestrator | Monday 08 September 2025  00:42:01 +0000 (0:00:00.174)       0:00:46.631 ******
2025-09-08 00:42:03.993911 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:42:03.993924 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:42:03.993937 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:03.993948 | orchestrator | 
2025-09-08 00:42:03.993959 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-08 00:42:03.993986 | orchestrator | Monday 08 September 2025  00:42:01 +0000 (0:00:00.374)       0:00:47.006 ******
2025-09-08 00:42:03.993998 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:42:03.994009 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:42:03.994064 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:03.994077 | orchestrator | 
2025-09-08 00:42:03.994088 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-08 00:42:03.994099 | orchestrator | Monday 08 September 2025  00:42:01 +0000 (0:00:00.184)       0:00:47.191 ******
2025-09-08 00:42:03.994109 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:42:03.994120 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:42:03.994131 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:03.994142 | orchestrator | 
2025-09-08 00:42:03.994153 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-08 00:42:03.994164 | orchestrator | Monday 08 September 2025  00:42:01 +0000 (0:00:00.201)       0:00:47.392 ******
2025-09-08 00:42:03.994175 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:42:03.994186 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:42:03.994196 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:03.994207 | orchestrator | 
2025-09-08 00:42:03.994218 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-08 00:42:03.994228 | orchestrator | Monday 08 September 2025  00:42:02 +0000 (0:00:00.160)       0:00:47.553 ******
2025-09-08 00:42:03.994239 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:42:03.994257 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:42:03.994268 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:03.994278 | orchestrator | 
2025-09-08 00:42:03.994295 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-08 00:42:03.994305 | orchestrator | Monday 08 September 2025  00:42:02 +0000 (0:00:00.235)       0:00:47.788 ******
2025-09-08 00:42:03.994316 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:42:03.994327 | orchestrator | 
2025-09-08 00:42:03.994338 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-08 00:42:03.994349 | orchestrator | Monday 08 September 2025  00:42:02 +0000 (0:00:00.508)       0:00:48.296 ******
2025-09-08 00:42:03.994359 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:42:03.994370 | orchestrator | 
2025-09-08 00:42:03.994380 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-08 00:42:03.994391 | orchestrator | Monday 08 September 2025  00:42:03 +0000 (0:00:00.507)       0:00:48.804 ******
2025-09-08 00:42:03.994402 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:42:03.994412 | orchestrator | 
2025-09-08 00:42:03.994423 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-08 00:42:03.994434 | orchestrator | Monday 08 September 2025  00:42:03 +0000 (0:00:00.173)       0:00:48.977 ******
2025-09-08 00:42:03.994444 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'vg_name': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:42:03.994456 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'vg_name': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:42:03.994467 | orchestrator | 
2025-09-08 00:42:03.994478 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-08 00:42:03.994488 | orchestrator | Monday 08 September 2025  00:42:03 +0000 (0:00:00.173)       0:00:49.151 ******
2025-09-08 00:42:03.994499 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:42:03.994510 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:42:03.994521 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:03.994531 | orchestrator | 
2025-09-08 00:42:03.994542 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-08 00:42:03.994552 | orchestrator | Monday 08 September 2025  00:42:03 +0000 (0:00:00.160)       0:00:49.311 ******
2025-09-08 00:42:03.994563 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:42:03.994574 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:42:03.994592 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:10.839925 | orchestrator | 
2025-09-08 00:42:10.840069 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-08 00:42:10.840087 | orchestrator | Monday 08 September 2025  00:42:03 +0000 (0:00:00.151)       0:00:49.463 ******
2025-09-08 00:42:10.840100 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'})
2025-09-08 00:42:10.840114 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'})
2025-09-08 00:42:10.840126 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:10.840138 | orchestrator | 
2025-09-08 00:42:10.840150 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-08 00:42:10.840161 | orchestrator | Monday 08 September 2025  00:42:04 +0000 (0:00:00.164)       0:00:49.628 ******
2025-09-08 00:42:10.840202 | orchestrator | ok: [testbed-node-4] => {
2025-09-08 00:42:10.840214 | orchestrator |     "lvm_report": {
2025-09-08 00:42:10.840228 | orchestrator |         "lv": [
2025-09-08 00:42:10.840239 | orchestrator |             {
2025-09-08 00:42:10.840250 | orchestrator |                 "lv_name": "osd-block-aa077d44-869a-533b-aa21-81dea0f926a7",
2025-09-08 00:42:10.840263 | orchestrator |                 "vg_name": "ceph-aa077d44-869a-533b-aa21-81dea0f926a7"
2025-09-08 00:42:10.840274 | orchestrator |             },
2025-09-08 00:42:10.840284 | orchestrator |             {
2025-09-08 00:42:10.840296 | orchestrator |                 "lv_name": "osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b",
2025-09-08 00:42:10.840307 | orchestrator |                 "vg_name": "ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b"
2025-09-08 00:42:10.840317 | orchestrator |             }
2025-09-08 00:42:10.840328 | orchestrator |         ],
2025-09-08 00:42:10.840339 | orchestrator |         "pv": [
2025-09-08 00:42:10.840350 | orchestrator |             {
2025-09-08 00:42:10.840361 | orchestrator |                 "pv_name": "/dev/sdb",
2025-09-08 00:42:10.840372 | orchestrator |                 "vg_name": "ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b"
2025-09-08 00:42:10.840383 | orchestrator |             },
2025-09-08 00:42:10.840394 | orchestrator |             {
2025-09-08 00:42:10.840405 | orchestrator |                 "pv_name": "/dev/sdc",
2025-09-08 00:42:10.840416 | orchestrator |                 "vg_name":
"ceph-aa077d44-869a-533b-aa21-81dea0f926a7" 2025-09-08 00:42:10.840429 | orchestrator |  } 2025-09-08 00:42:10.840443 | orchestrator |  ] 2025-09-08 00:42:10.840455 | orchestrator |  } 2025-09-08 00:42:10.840468 | orchestrator | } 2025-09-08 00:42:10.840481 | orchestrator | 2025-09-08 00:42:10.840494 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-08 00:42:10.840506 | orchestrator | 2025-09-08 00:42:10.840519 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-08 00:42:10.840531 | orchestrator | Monday 08 September 2025 00:42:04 +0000 (0:00:00.486) 0:00:50.114 ****** 2025-09-08 00:42:10.840545 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-08 00:42:10.840558 | orchestrator | 2025-09-08 00:42:10.840570 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-08 00:42:10.840582 | orchestrator | Monday 08 September 2025 00:42:04 +0000 (0:00:00.260) 0:00:50.375 ****** 2025-09-08 00:42:10.840595 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:42:10.840608 | orchestrator | 2025-09-08 00:42:10.840621 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:42:10.840633 | orchestrator | Monday 08 September 2025 00:42:05 +0000 (0:00:00.263) 0:00:50.638 ****** 2025-09-08 00:42:10.840646 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-08 00:42:10.840658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-08 00:42:10.840671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-08 00:42:10.840707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-08 00:42:10.840721 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-08 00:42:10.840733 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-08 00:42:10.840746 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-08 00:42:10.840758 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-08 00:42:10.840771 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-08 00:42:10.840783 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-08 00:42:10.840794 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-08 00:42:10.840813 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-08 00:42:10.840824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-08 00:42:10.840835 | orchestrator | 2025-09-08 00:42:10.840846 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:42:10.840857 | orchestrator | Monday 08 September 2025 00:42:05 +0000 (0:00:00.421) 0:00:51.059 ****** 2025-09-08 00:42:10.840868 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:10.840883 | orchestrator | 2025-09-08 00:42:10.840895 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:42:10.840906 | orchestrator | Monday 08 September 2025 00:42:05 +0000 (0:00:00.213) 0:00:51.272 ****** 2025-09-08 00:42:10.840916 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:10.840927 | orchestrator | 2025-09-08 00:42:10.840938 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:42:10.840970 | orchestrator | 
Monday 08 September 2025 00:42:06 +0000 (0:00:00.209) 0:00:51.482 ****** 2025-09-08 00:42:10.840981 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:10.840992 | orchestrator | 2025-09-08 00:42:10.841003 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:42:10.841014 | orchestrator | Monday 08 September 2025 00:42:06 +0000 (0:00:00.228) 0:00:51.711 ****** 2025-09-08 00:42:10.841025 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:10.841036 | orchestrator | 2025-09-08 00:42:10.841047 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:42:10.841058 | orchestrator | Monday 08 September 2025 00:42:06 +0000 (0:00:00.258) 0:00:51.969 ****** 2025-09-08 00:42:10.841069 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:10.841080 | orchestrator | 2025-09-08 00:42:10.841149 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:42:10.841162 | orchestrator | Monday 08 September 2025 00:42:06 +0000 (0:00:00.219) 0:00:52.188 ****** 2025-09-08 00:42:10.841173 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:10.841184 | orchestrator | 2025-09-08 00:42:10.841194 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:42:10.841205 | orchestrator | Monday 08 September 2025 00:42:07 +0000 (0:00:00.718) 0:00:52.907 ****** 2025-09-08 00:42:10.841216 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:10.841227 | orchestrator | 2025-09-08 00:42:10.841238 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:42:10.841248 | orchestrator | Monday 08 September 2025 00:42:07 +0000 (0:00:00.236) 0:00:53.143 ****** 2025-09-08 00:42:10.841259 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:10.841270 | orchestrator | 2025-09-08 00:42:10.841281 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:42:10.841291 | orchestrator | Monday 08 September 2025 00:42:07 +0000 (0:00:00.307) 0:00:53.451 ****** 2025-09-08 00:42:10.841302 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8) 2025-09-08 00:42:10.841315 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8) 2025-09-08 00:42:10.841326 | orchestrator | 2025-09-08 00:42:10.841336 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:42:10.841347 | orchestrator | Monday 08 September 2025 00:42:08 +0000 (0:00:00.487) 0:00:53.939 ****** 2025-09-08 00:42:10.841358 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a654280a-a62d-423c-bf4b-ecfb391ad989) 2025-09-08 00:42:10.841369 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a654280a-a62d-423c-bf4b-ecfb391ad989) 2025-09-08 00:42:10.841380 | orchestrator | 2025-09-08 00:42:10.841390 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:42:10.841401 | orchestrator | Monday 08 September 2025 00:42:08 +0000 (0:00:00.422) 0:00:54.361 ****** 2025-09-08 00:42:10.841424 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_63bbd3aa-19f1-48b0-9249-561d852b638c) 2025-09-08 00:42:10.841436 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_63bbd3aa-19f1-48b0-9249-561d852b638c) 2025-09-08 00:42:10.841447 | orchestrator | 2025-09-08 00:42:10.841458 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:42:10.841469 | orchestrator | Monday 08 September 2025 00:42:09 +0000 (0:00:00.607) 0:00:54.968 ****** 2025-09-08 00:42:10.841479 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_17ecbc41-9c45-4ac3-8b64-5422c11ec1e9) 2025-09-08 00:42:10.841490 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_17ecbc41-9c45-4ac3-8b64-5422c11ec1e9) 2025-09-08 00:42:10.841501 | orchestrator | 2025-09-08 00:42:10.841512 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:42:10.841523 | orchestrator | Monday 08 September 2025 00:42:09 +0000 (0:00:00.442) 0:00:55.411 ****** 2025-09-08 00:42:10.841534 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-08 00:42:10.841544 | orchestrator | 2025-09-08 00:42:10.841555 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:10.841566 | orchestrator | Monday 08 September 2025 00:42:10 +0000 (0:00:00.447) 0:00:55.858 ****** 2025-09-08 00:42:10.841576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-08 00:42:10.841587 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-08 00:42:10.841598 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-08 00:42:10.841609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-08 00:42:10.841619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-08 00:42:10.841630 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-08 00:42:10.841641 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-08 00:42:10.841651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-08 00:42:10.841662 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-08 00:42:10.841673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-08 00:42:10.841701 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-08 00:42:10.841720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-08 00:42:19.822748 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-08 00:42:19.822880 | orchestrator | 2025-09-08 00:42:19.822898 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:19.822911 | orchestrator | Monday 08 September 2025 00:42:10 +0000 (0:00:00.441) 0:00:56.300 ****** 2025-09-08 00:42:19.822922 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:19.822935 | orchestrator | 2025-09-08 00:42:19.822946 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:19.822957 | orchestrator | Monday 08 September 2025 00:42:11 +0000 (0:00:00.202) 0:00:56.502 ****** 2025-09-08 00:42:19.822968 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:19.822980 | orchestrator | 2025-09-08 00:42:19.822990 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:19.823002 | orchestrator | Monday 08 September 2025 00:42:11 +0000 (0:00:00.205) 0:00:56.708 ****** 2025-09-08 00:42:19.823013 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:19.823024 | orchestrator | 2025-09-08 00:42:19.823035 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:19.823075 | orchestrator | Monday 08 September 2025 00:42:12 +0000 (0:00:00.773) 0:00:57.481 ****** 2025-09-08 00:42:19.823086 | orchestrator | 
skipping: [testbed-node-5] 2025-09-08 00:42:19.823097 | orchestrator | 2025-09-08 00:42:19.823108 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:19.823119 | orchestrator | Monday 08 September 2025 00:42:12 +0000 (0:00:00.244) 0:00:57.726 ****** 2025-09-08 00:42:19.823129 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:19.823140 | orchestrator | 2025-09-08 00:42:19.823151 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:19.823161 | orchestrator | Monday 08 September 2025 00:42:12 +0000 (0:00:00.206) 0:00:57.932 ****** 2025-09-08 00:42:19.823172 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:19.823182 | orchestrator | 2025-09-08 00:42:19.823193 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:19.823204 | orchestrator | Monday 08 September 2025 00:42:12 +0000 (0:00:00.206) 0:00:58.139 ****** 2025-09-08 00:42:19.823216 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:19.823229 | orchestrator | 2025-09-08 00:42:19.823241 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:19.823254 | orchestrator | Monday 08 September 2025 00:42:12 +0000 (0:00:00.223) 0:00:58.362 ****** 2025-09-08 00:42:19.823266 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:19.823279 | orchestrator | 2025-09-08 00:42:19.823291 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:19.823303 | orchestrator | Monday 08 September 2025 00:42:13 +0000 (0:00:00.210) 0:00:58.572 ****** 2025-09-08 00:42:19.823315 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-08 00:42:19.823328 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-08 00:42:19.823357 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-08 
00:42:19.823370 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-08 00:42:19.823382 | orchestrator | 2025-09-08 00:42:19.823395 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:19.823407 | orchestrator | Monday 08 September 2025 00:42:13 +0000 (0:00:00.639) 0:00:59.212 ****** 2025-09-08 00:42:19.823419 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:19.823432 | orchestrator | 2025-09-08 00:42:19.823444 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:19.823458 | orchestrator | Monday 08 September 2025 00:42:13 +0000 (0:00:00.198) 0:00:59.410 ****** 2025-09-08 00:42:19.823470 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:19.823483 | orchestrator | 2025-09-08 00:42:19.823496 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:19.823508 | orchestrator | Monday 08 September 2025 00:42:14 +0000 (0:00:00.197) 0:00:59.608 ****** 2025-09-08 00:42:19.823521 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:19.823533 | orchestrator | 2025-09-08 00:42:19.823545 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:19.823558 | orchestrator | Monday 08 September 2025 00:42:14 +0000 (0:00:00.199) 0:00:59.807 ****** 2025-09-08 00:42:19.823570 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:19.823584 | orchestrator | 2025-09-08 00:42:19.823594 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-08 00:42:19.823605 | orchestrator | Monday 08 September 2025 00:42:14 +0000 (0:00:00.197) 0:01:00.004 ****** 2025-09-08 00:42:19.823616 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:19.823626 | orchestrator | 2025-09-08 00:42:19.823637 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-09-08 00:42:19.823648 | orchestrator | Monday 08 September 2025 00:42:14 +0000 (0:00:00.356) 0:01:00.361 ****** 2025-09-08 00:42:19.823659 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'df550631-cfd3-5799-aa47-c702e103b9e1'}}) 2025-09-08 00:42:19.823671 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eee7454c-3e15-5681-817b-16336d12a7fd'}}) 2025-09-08 00:42:19.823715 | orchestrator | 2025-09-08 00:42:19.823726 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-08 00:42:19.823737 | orchestrator | Monday 08 September 2025 00:42:15 +0000 (0:00:00.200) 0:01:00.561 ****** 2025-09-08 00:42:19.823749 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'}) 2025-09-08 00:42:19.823762 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'}) 2025-09-08 00:42:19.823773 | orchestrator | 2025-09-08 00:42:19.823784 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-08 00:42:19.823812 | orchestrator | Monday 08 September 2025 00:42:16 +0000 (0:00:01.800) 0:01:02.362 ****** 2025-09-08 00:42:19.823824 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'})  2025-09-08 00:42:19.823837 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'})  2025-09-08 00:42:19.823848 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:19.823858 | orchestrator | 2025-09-08 00:42:19.823869 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-09-08 00:42:19.823880 | orchestrator | Monday 08 September 2025 00:42:17 +0000 (0:00:00.151) 0:01:02.514 ****** 2025-09-08 00:42:19.823890 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'}) 2025-09-08 00:42:19.823901 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'}) 2025-09-08 00:42:19.823913 | orchestrator | 2025-09-08 00:42:19.823924 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-08 00:42:19.823935 | orchestrator | Monday 08 September 2025 00:42:18 +0000 (0:00:01.237) 0:01:03.752 ****** 2025-09-08 00:42:19.823946 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'})  2025-09-08 00:42:19.823957 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'})  2025-09-08 00:42:19.823967 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:19.823978 | orchestrator | 2025-09-08 00:42:19.823989 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-08 00:42:19.824000 | orchestrator | Monday 08 September 2025 00:42:18 +0000 (0:00:00.153) 0:01:03.906 ****** 2025-09-08 00:42:19.824010 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:19.824021 | orchestrator | 2025-09-08 00:42:19.824032 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-08 00:42:19.824043 | orchestrator | Monday 08 September 2025 00:42:18 +0000 (0:00:00.142) 0:01:04.048 ****** 2025-09-08 00:42:19.824054 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'})  2025-09-08 00:42:19.824071 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'})  2025-09-08 00:42:19.824082 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:19.824093 | orchestrator | 2025-09-08 00:42:19.824104 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-08 00:42:19.824115 | orchestrator | Monday 08 September 2025 00:42:18 +0000 (0:00:00.144) 0:01:04.193 ****** 2025-09-08 00:42:19.824126 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:19.824145 | orchestrator | 2025-09-08 00:42:19.824156 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-08 00:42:19.824167 | orchestrator | Monday 08 September 2025 00:42:18 +0000 (0:00:00.134) 0:01:04.327 ****** 2025-09-08 00:42:19.824178 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'})  2025-09-08 00:42:19.824189 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'})  2025-09-08 00:42:19.824199 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:19.824210 | orchestrator | 2025-09-08 00:42:19.824221 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-08 00:42:19.824232 | orchestrator | Monday 08 September 2025 00:42:19 +0000 (0:00:00.155) 0:01:04.483 ****** 2025-09-08 00:42:19.824242 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:19.824253 | orchestrator | 2025-09-08 00:42:19.824264 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-09-08 00:42:19.824275 | orchestrator | Monday 08 September 2025 00:42:19 +0000 (0:00:00.136) 0:01:04.619 ****** 2025-09-08 00:42:19.824285 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'})  2025-09-08 00:42:19.824296 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'})  2025-09-08 00:42:19.824307 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:19.824318 | orchestrator | 2025-09-08 00:42:19.824329 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-08 00:42:19.824340 | orchestrator | Monday 08 September 2025 00:42:19 +0000 (0:00:00.141) 0:01:04.760 ****** 2025-09-08 00:42:19.824350 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:42:19.824361 | orchestrator | 2025-09-08 00:42:19.824372 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-08 00:42:19.824383 | orchestrator | Monday 08 September 2025 00:42:19 +0000 (0:00:00.361) 0:01:05.122 ****** 2025-09-08 00:42:19.824401 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'})  2025-09-08 00:42:26.030662 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'})  2025-09-08 00:42:26.030838 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:26.030854 | orchestrator | 2025-09-08 00:42:26.030867 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-08 00:42:26.030880 | orchestrator | Monday 08 September 2025 
00:42:19 +0000 (0:00:00.175) 0:01:05.297 ****** 2025-09-08 00:42:26.030892 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'})  2025-09-08 00:42:26.030903 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'})  2025-09-08 00:42:26.030914 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:26.030926 | orchestrator | 2025-09-08 00:42:26.030937 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-08 00:42:26.030948 | orchestrator | Monday 08 September 2025 00:42:19 +0000 (0:00:00.169) 0:01:05.467 ****** 2025-09-08 00:42:26.030959 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'})  2025-09-08 00:42:26.030970 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'})  2025-09-08 00:42:26.030981 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:26.031017 | orchestrator | 2025-09-08 00:42:26.031029 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-08 00:42:26.031040 | orchestrator | Monday 08 September 2025 00:42:20 +0000 (0:00:00.160) 0:01:05.627 ****** 2025-09-08 00:42:26.031051 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:26.031062 | orchestrator | 2025-09-08 00:42:26.031073 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-08 00:42:26.031084 | orchestrator | Monday 08 September 2025 00:42:20 +0000 (0:00:00.141) 0:01:05.769 ****** 2025-09-08 00:42:26.031094 | orchestrator | skipping: [testbed-node-5] 2025-09-08 
00:42:26.031105 | orchestrator |
2025-09-08 00:42:26.031116 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-08 00:42:26.031127 | orchestrator | Monday 08 September 2025  00:42:20 +0000 (0:00:00.150)       0:01:05.919 ******
2025-09-08 00:42:26.031138 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:26.031148 | orchestrator |
2025-09-08 00:42:26.031159 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-08 00:42:26.031170 | orchestrator | Monday 08 September 2025  00:42:20 +0000 (0:00:00.142)       0:01:06.061 ******
2025-09-08 00:42:26.031181 | orchestrator | ok: [testbed-node-5] => {
2025-09-08 00:42:26.031193 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-09-08 00:42:26.031204 | orchestrator | }
2025-09-08 00:42:26.031215 | orchestrator |
2025-09-08 00:42:26.031226 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-08 00:42:26.031237 | orchestrator | Monday 08 September 2025  00:42:20 +0000 (0:00:00.155)       0:01:06.217 ******
2025-09-08 00:42:26.031248 | orchestrator | ok: [testbed-node-5] => {
2025-09-08 00:42:26.031259 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-09-08 00:42:26.031269 | orchestrator | }
2025-09-08 00:42:26.031280 | orchestrator |
2025-09-08 00:42:26.031291 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-08 00:42:26.031302 | orchestrator | Monday 08 September 2025  00:42:20 +0000 (0:00:00.154)       0:01:06.371 ******
2025-09-08 00:42:26.031313 | orchestrator | ok: [testbed-node-5] => {
2025-09-08 00:42:26.031324 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-09-08 00:42:26.031335 | orchestrator | }
2025-09-08 00:42:26.031346 | orchestrator |
2025-09-08 00:42:26.031357 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-08 00:42:26.031368 | orchestrator | Monday 08 September 2025  00:42:21 +0000 (0:00:00.144)       0:01:06.516 ******
2025-09-08 00:42:26.031379 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:42:26.031390 | orchestrator |
2025-09-08 00:42:26.031400 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-08 00:42:26.031411 | orchestrator | Monday 08 September 2025  00:42:21 +0000 (0:00:00.507)       0:01:07.024 ******
2025-09-08 00:42:26.031422 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:42:26.031433 | orchestrator |
2025-09-08 00:42:26.031444 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-08 00:42:26.031455 | orchestrator | Monday 08 September 2025  00:42:22 +0000 (0:00:00.481)       0:01:07.505 ******
2025-09-08 00:42:26.031466 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:42:26.031476 | orchestrator |
2025-09-08 00:42:26.031487 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-08 00:42:26.031498 | orchestrator | Monday 08 September 2025  00:42:22 +0000 (0:00:00.707)       0:01:08.212 ******
2025-09-08 00:42:26.031509 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:42:26.031520 | orchestrator |
2025-09-08 00:42:26.031531 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-08 00:42:26.031542 | orchestrator | Monday 08 September 2025  00:42:22 +0000 (0:00:00.157)       0:01:08.370 ******
2025-09-08 00:42:26.031553 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:26.031564 | orchestrator |
2025-09-08 00:42:26.031575 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-08 00:42:26.031585 | orchestrator | Monday 08 September 2025  00:42:23 +0000 (0:00:00.121)       0:01:08.492 ******
2025-09-08 00:42:26.031604 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:26.031615 | orchestrator |
2025-09-08 00:42:26.031626 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-08 00:42:26.031637 | orchestrator | Monday 08 September 2025  00:42:23 +0000 (0:00:00.124)       0:01:08.616 ******
2025-09-08 00:42:26.031648 | orchestrator | ok: [testbed-node-5] => {
2025-09-08 00:42:26.031699 | orchestrator |     "vgs_report": {
2025-09-08 00:42:26.031713 | orchestrator |         "vg": []
2025-09-08 00:42:26.031742 | orchestrator |     }
2025-09-08 00:42:26.031754 | orchestrator | }
2025-09-08 00:42:26.031765 | orchestrator |
2025-09-08 00:42:26.031777 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-08 00:42:26.031788 | orchestrator | Monday 08 September 2025  00:42:23 +0000 (0:00:00.166)       0:01:08.783 ******
2025-09-08 00:42:26.031799 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:26.031810 | orchestrator |
2025-09-08 00:42:26.031821 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-08 00:42:26.031832 | orchestrator | Monday 08 September 2025  00:42:23 +0000 (0:00:00.138)       0:01:08.922 ******
2025-09-08 00:42:26.031843 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:26.031854 | orchestrator |
2025-09-08 00:42:26.031865 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-08 00:42:26.031876 | orchestrator | Monday 08 September 2025  00:42:23 +0000 (0:00:00.151)       0:01:09.073 ******
2025-09-08 00:42:26.031887 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:26.031899 | orchestrator |
2025-09-08 00:42:26.031910 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-08 00:42:26.031921 | orchestrator | Monday 08 September 2025  00:42:23 +0000 (0:00:00.139)       0:01:09.213 ******
2025-09-08 00:42:26.031932 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:26.031943 | orchestrator |
2025-09-08 00:42:26.031954 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-08 00:42:26.031965 | orchestrator | Monday 08 September 2025  00:42:23 +0000 (0:00:00.141)       0:01:09.355 ******
2025-09-08 00:42:26.031976 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:26.031987 | orchestrator |
2025-09-08 00:42:26.031998 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-08 00:42:26.032009 | orchestrator | Monday 08 September 2025  00:42:24 +0000 (0:00:00.137)       0:01:09.492 ******
2025-09-08 00:42:26.032020 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:26.032031 | orchestrator |
2025-09-08 00:42:26.032042 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-08 00:42:26.032053 | orchestrator | Monday 08 September 2025  00:42:24 +0000 (0:00:00.145)       0:01:09.638 ******
2025-09-08 00:42:26.032064 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:26.032075 | orchestrator |
2025-09-08 00:42:26.032086 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-08 00:42:26.032097 | orchestrator | Monday 08 September 2025  00:42:24 +0000 (0:00:00.139)       0:01:09.777 ******
2025-09-08 00:42:26.032108 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:26.032119 | orchestrator |
2025-09-08 00:42:26.032130 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-08 00:42:26.032141 | orchestrator | Monday 08 September 2025  00:42:24 +0000 (0:00:00.163)       0:01:09.941 ******
2025-09-08 00:42:26.032152 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:26.032163 | orchestrator |
2025-09-08 00:42:26.032174 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-08 00:42:26.032191 | orchestrator | Monday 08 September 2025  00:42:24 +0000 (0:00:00.362)       0:01:10.304 ******
2025-09-08 00:42:26.032202 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:26.032213 | orchestrator |
2025-09-08 00:42:26.032224 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-08 00:42:26.032235 | orchestrator | Monday 08 September 2025  00:42:24 +0000 (0:00:00.140)       0:01:10.445 ******
2025-09-08 00:42:26.032246 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:26.032265 | orchestrator |
2025-09-08 00:42:26.032276 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-08 00:42:26.032287 | orchestrator | Monday 08 September 2025  00:42:25 +0000 (0:00:00.135)       0:01:10.580 ******
2025-09-08 00:42:26.032298 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:26.032309 | orchestrator |
2025-09-08 00:42:26.032320 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-08 00:42:26.032332 | orchestrator | Monday 08 September 2025  00:42:25 +0000 (0:00:00.161)       0:01:10.742 ******
2025-09-08 00:42:26.032343 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:26.032354 | orchestrator |
2025-09-08 00:42:26.032365 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-08 00:42:26.032376 | orchestrator | Monday 08 September 2025  00:42:25 +0000 (0:00:00.143)       0:01:10.886 ******
2025-09-08 00:42:26.032387 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:26.032398 | orchestrator |
2025-09-08 00:42:26.032409 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-08 00:42:26.032420 | orchestrator | Monday 08 September 2025  00:42:25 +0000 (0:00:00.143)       0:01:11.029 ******
2025-09-08 00:42:26.032431 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'})
2025-09-08 00:42:26.032443 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'})
2025-09-08 00:42:26.032454 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:26.032465 | orchestrator |
2025-09-08 00:42:26.032476 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-08 00:42:26.032487 | orchestrator | Monday 08 September 2025  00:42:25 +0000 (0:00:00.152)       0:01:11.182 ******
2025-09-08 00:42:26.032498 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'})
2025-09-08 00:42:26.032510 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'})
2025-09-08 00:42:26.032521 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:26.032532 | orchestrator |
2025-09-08 00:42:26.032543 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-08 00:42:26.032554 | orchestrator | Monday 08 September 2025  00:42:25 +0000 (0:00:00.163)       0:01:11.346 ******
2025-09-08 00:42:26.032572 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'})
2025-09-08 00:42:28.965554 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'})
2025-09-08 00:42:28.965720 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:28.965738 | orchestrator |
2025-09-08 00:42:28.965751 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-08 00:42:28.965764 | orchestrator | Monday 08 September 2025  00:42:26 +0000 (0:00:00.158)       0:01:11.504 ******
2025-09-08 00:42:28.965775 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'})
2025-09-08 00:42:28.965787 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'})
2025-09-08 00:42:28.965798 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:28.965812 | orchestrator |
2025-09-08 00:42:28.965832 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-08 00:42:28.965850 | orchestrator | Monday 08 September 2025  00:42:26 +0000 (0:00:00.154)       0:01:11.659 ******
2025-09-08 00:42:28.965867 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'})
2025-09-08 00:42:28.965925 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'})
2025-09-08 00:42:28.965944 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:28.965962 | orchestrator |
2025-09-08 00:42:28.965982 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-08 00:42:28.966000 | orchestrator | Monday 08 September 2025  00:42:26 +0000 (0:00:00.145)       0:01:11.804 ******
2025-09-08 00:42:28.966084 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'})
2025-09-08 00:42:28.966109 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'})
2025-09-08 00:42:28.966131 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:28.966151 | orchestrator |
2025-09-08 00:42:28.966192 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-08 00:42:28.966213 | orchestrator | Monday 08 September 2025  00:42:26 +0000 (0:00:00.155)       0:01:11.959 ******
2025-09-08 00:42:28.966232 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'})
2025-09-08 00:42:28.966254 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'})
2025-09-08 00:42:28.966275 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:28.966295 | orchestrator |
2025-09-08 00:42:28.966315 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-08 00:42:28.966334 | orchestrator | Monday 08 September 2025  00:42:26 +0000 (0:00:00.376)       0:01:12.335 ******
2025-09-08 00:42:28.966354 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'})
2025-09-08 00:42:28.966374 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'})
2025-09-08 00:42:28.966394 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:28.966413 | orchestrator |
2025-09-08 00:42:28.966433 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-08 00:42:28.966451 | orchestrator | Monday 08 September 2025  00:42:27 +0000 (0:00:00.159)       0:01:12.495 ******
2025-09-08 00:42:28.966470 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:42:28.966482 | orchestrator |
2025-09-08 00:42:28.966493 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-08 00:42:28.966504 | orchestrator | Monday 08 September 2025  00:42:27 +0000 (0:00:00.477)       0:01:12.972 ******
2025-09-08 00:42:28.966514 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:42:28.966525 | orchestrator |
2025-09-08 00:42:28.966536 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-08 00:42:28.966546 | orchestrator | Monday 08 September 2025  00:42:27 +0000 (0:00:00.487)       0:01:13.459 ******
2025-09-08 00:42:28.966557 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:42:28.966567 | orchestrator |
2025-09-08 00:42:28.966578 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-08 00:42:28.966589 | orchestrator | Monday 08 September 2025  00:42:28 +0000 (0:00:00.156)       0:01:13.615 ******
2025-09-08 00:42:28.966599 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'vg_name': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'})
2025-09-08 00:42:28.966612 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'vg_name': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'})
2025-09-08 00:42:28.966622 | orchestrator |
2025-09-08 00:42:28.966633 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-08 00:42:28.966657 | orchestrator | Monday 08 September 2025  00:42:28 +0000 (0:00:00.190)       0:01:13.806 ******
2025-09-08 00:42:28.966714 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'})
2025-09-08 00:42:28.966727 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'})
2025-09-08 00:42:28.966738 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:28.966749 | orchestrator |
2025-09-08 00:42:28.966760 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-08 00:42:28.966770 | orchestrator | Monday 08 September 2025  00:42:28 +0000 (0:00:00.162)       0:01:13.968 ******
2025-09-08 00:42:28.966781 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'})
2025-09-08 00:42:28.966792 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'})
2025-09-08 00:42:28.966804 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:28.966815 | orchestrator |
2025-09-08 00:42:28.966825 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-08 00:42:28.966836 | orchestrator | Monday 08 September 2025  00:42:28 +0000 (0:00:00.155)       0:01:14.123 ******
2025-09-08 00:42:28.966847 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'})
2025-09-08 00:42:28.966857 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'})
2025-09-08 00:42:28.966868 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:28.966879 | orchestrator |
2025-09-08 00:42:28.966890 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-08 00:42:28.966900 | orchestrator | Monday 08 September 2025  00:42:28 +0000 (0:00:00.149)       0:01:14.273 ******
2025-09-08 00:42:28.966911 | orchestrator | ok: [testbed-node-5] => {
2025-09-08 00:42:28.966922 | orchestrator |     "lvm_report": {
2025-09-08 00:42:28.966933 | orchestrator |         "lv": [
2025-09-08 00:42:28.966944 | orchestrator |             {
2025-09-08 00:42:28.966955 | orchestrator |                 "lv_name": "osd-block-df550631-cfd3-5799-aa47-c702e103b9e1",
2025-09-08 00:42:28.966973 | orchestrator |                 "vg_name": "ceph-df550631-cfd3-5799-aa47-c702e103b9e1"
2025-09-08 00:42:28.966984 | orchestrator |             },
2025-09-08 00:42:28.966995 | orchestrator |             {
2025-09-08 00:42:28.967006 | orchestrator |                 "lv_name": "osd-block-eee7454c-3e15-5681-817b-16336d12a7fd",
2025-09-08 00:42:28.967016 | orchestrator |                 "vg_name": "ceph-eee7454c-3e15-5681-817b-16336d12a7fd"
2025-09-08 00:42:28.967027 | orchestrator |             }
2025-09-08 00:42:28.967038 | orchestrator |         ],
2025-09-08 00:42:28.967048 | orchestrator |         "pv": [
2025-09-08 00:42:28.967059 | orchestrator |             {
2025-09-08 00:42:28.967069 | orchestrator |                 "pv_name": "/dev/sdb",
2025-09-08 00:42:28.967080 | orchestrator |                 "vg_name": "ceph-df550631-cfd3-5799-aa47-c702e103b9e1"
2025-09-08 00:42:28.967091 | orchestrator |             },
2025-09-08 00:42:28.967101 | orchestrator |             {
2025-09-08 00:42:28.967112 | orchestrator |                 "pv_name": "/dev/sdc",
2025-09-08 00:42:28.967123 | orchestrator |                 "vg_name": "ceph-eee7454c-3e15-5681-817b-16336d12a7fd"
2025-09-08 00:42:28.967133 | orchestrator |             }
2025-09-08 00:42:28.967144 | orchestrator |         ]
2025-09-08 00:42:28.967154 | orchestrator |     }
2025-09-08 00:42:28.967165 | orchestrator | }
2025-09-08 00:42:28.967176 | orchestrator |
2025-09-08 00:42:28.967187 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:42:28.967205 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-08 00:42:28.967216 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-08 00:42:28.967226 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-08 00:42:28.967237 | orchestrator |
2025-09-08 00:42:28.967248 | orchestrator |
2025-09-08 00:42:28.967258 | orchestrator |
2025-09-08 00:42:28.967269 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:42:28.967280 | orchestrator | Monday 08 September 2025  00:42:28 +0000 (0:00:00.143)       0:01:14.416 ******
2025-09-08 00:42:28.967290 | orchestrator | ===============================================================================
2025-09-08 00:42:28.967301 | orchestrator | Create block VGs -------------------------------------------------------- 5.59s
2025-09-08 00:42:28.967311 | orchestrator | Create block LVs -------------------------------------------------------- 4.18s
2025-09-08 00:42:28.967322 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.96s
2025-09-08 00:42:28.967333 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.79s
2025-09-08 00:42:28.967344 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.55s
2025-09-08 00:42:28.967354 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.53s
2025-09-08 00:42:28.967365 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.51s
2025-09-08 00:42:28.967375 | orchestrator | Add known partitions to the list of available block devices ------------- 1.39s
2025-09-08 00:42:28.967392 | orchestrator | Add known links to the list of available block devices ------------------ 1.22s
2025-09-08 00:42:29.344495 | orchestrator | Print LVM report data --------------------------------------------------- 0.99s
2025-09-08 00:42:29.344619 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s
2025-09-08 00:42:29.344634 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.82s
2025-09-08 00:42:29.344645 | orchestrator | Add known partitions to the list of available block devices ------------- 0.81s
2025-09-08 00:42:29.344656 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s
2025-09-08 00:42:29.344712 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s
2025-09-08 00:42:29.344724 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s
2025-09-08 00:42:29.344735 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s
2025-09-08 00:42:29.344746 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.72s
2025-09-08 00:42:29.344757 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.72s
2025-09-08 00:42:29.344768 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2025-09-08 00:42:41.726436 | orchestrator | 2025-09-08 00:42:41 | INFO  | Task d83a4564-d01d-408d-8618-18eb4ec4cfe3 (facts) was prepared for execution.
2025-09-08 00:42:41.726573 | orchestrator | 2025-09-08 00:42:41 | INFO  | It takes a moment until task d83a4564-d01d-408d-8618-18eb4ec4cfe3 (facts) has been started and output is visible here.
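The "Get list of Ceph LVs/PVs with associated VGs" and "Combine JSON" tasks above boil down to parsing LVM's JSON report format (the shape produced by `lvs`/`pvs` with `--reportformat json`) and joining the rows into a list of VG/LV names. A minimal sketch of that parsing, assuming output shaped like `lvs --reportformat json -o lv_name,vg_name`; the function name and sample data are illustrative, not taken from the playbook:

```python
import json

def parse_lvm_report(raw: str, key: str) -> list[dict]:
    """Extract the per-row entries (e.g. 'lv' or 'pv') from an LVM JSON report."""
    # LVM wraps its rows as {"report": [{"lv": [...]}]} (or "pv"/"vg").
    return json.loads(raw)["report"][0][key]

# Sample output shaped like `lvs --reportformat json -o lv_name,vg_name`,
# using one of the LVs printed in the log above.
sample = """
{"report": [{"lv": [
  {"lv_name": "osd-block-df550631-cfd3-5799-aa47-c702e103b9e1",
   "vg_name": "ceph-df550631-cfd3-5799-aa47-c702e103b9e1"}
]}]}
"""

lvs = parse_lvm_report(sample, "lv")
# Build "vg/lv" names, analogous to the "Create list of VG/LV names" task.
vg_lv_names = [f"{e['vg_name']}/{e['lv_name']}" for e in lvs]
```

The same parser works for the `pvs` output by asking for the `"pv"` key instead.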
2025-09-08 00:42:55.856506 | orchestrator |
2025-09-08 00:42:55.856699 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-08 00:42:55.856723 | orchestrator |
2025-09-08 00:42:55.856735 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-08 00:42:55.856748 | orchestrator | Monday 08 September 2025  00:42:45 +0000 (0:00:00.285)       0:00:00.285 ******
2025-09-08 00:42:55.856759 | orchestrator | ok: [testbed-manager]
2025-09-08 00:42:55.856772 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:42:55.856821 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:42:55.856833 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:42:55.856845 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:42:55.856856 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:42:55.856867 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:42:55.856877 | orchestrator |
2025-09-08 00:42:55.856888 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-08 00:42:55.856899 | orchestrator | Monday 08 September 2025  00:42:46 +0000 (0:00:01.090)       0:00:01.376 ******
2025-09-08 00:42:55.856910 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:42:55.856922 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:42:55.856934 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:42:55.856946 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:42:55.856957 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:42:55.856968 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:55.856979 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:55.856990 | orchestrator |
2025-09-08 00:42:55.857000 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-08 00:42:55.857012 | orchestrator |
2025-09-08 00:42:55.857023 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-08 00:42:55.857034 | orchestrator | Monday 08 September 2025  00:42:48 +0000 (0:00:01.231)       0:00:02.607 ******
2025-09-08 00:42:55.857045 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:42:55.857056 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:42:55.857067 | orchestrator | ok: [testbed-manager]
2025-09-08 00:42:55.857079 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:42:55.857090 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:42:55.857101 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:42:55.857112 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:42:55.857124 | orchestrator |
2025-09-08 00:42:55.857135 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-08 00:42:55.857147 | orchestrator |
2025-09-08 00:42:55.857159 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-08 00:42:55.857171 | orchestrator | Monday 08 September 2025  00:42:54 +0000 (0:00:06.829)       0:00:09.436 ******
2025-09-08 00:42:55.857182 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:42:55.857194 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:42:55.857205 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:42:55.857216 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:42:55.857227 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:42:55.857238 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:55.857250 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:55.857262 | orchestrator |
2025-09-08 00:42:55.857273 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:42:55.857285 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:42:55.857298 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:42:55.857309 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:42:55.857321 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:42:55.857332 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:42:55.857344 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:42:55.857356 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:42:55.857378 | orchestrator |
2025-09-08 00:42:55.857390 | orchestrator |
2025-09-08 00:42:55.857401 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:42:55.857412 | orchestrator | Monday 08 September 2025  00:42:55 +0000 (0:00:00.545)       0:00:09.982 ******
2025-09-08 00:42:55.857422 | orchestrator | ===============================================================================
2025-09-08 00:42:55.857433 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.83s
2025-09-08 00:42:55.857443 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.23s
2025-09-08 00:42:55.857453 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.09s
2025-09-08 00:42:55.857464 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s
2025-09-08 00:43:08.279882 | orchestrator | 2025-09-08 00:43:08 | INFO  | Task 57ce25c9-1cdc-4766-a749-aaefaa879c1b (frr) was prepared for execution.
2025-09-08 00:43:08.280017 | orchestrator | 2025-09-08 00:43:08 | INFO  | It takes a moment until task 57ce25c9-1cdc-4766-a749-aaefaa879c1b (frr) has been started and output is visible here.
2025-09-08 00:43:34.156978 | orchestrator |
2025-09-08 00:43:34.157104 | orchestrator | PLAY [Apply role frr] **********************************************************
2025-09-08 00:43:34.157122 | orchestrator |
2025-09-08 00:43:34.157136 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2025-09-08 00:43:34.157148 | orchestrator | Monday 08 September 2025  00:43:12 +0000 (0:00:00.242)       0:00:00.242 ******
2025-09-08 00:43:34.157180 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2025-09-08 00:43:34.157193 | orchestrator |
2025-09-08 00:43:34.157204 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2025-09-08 00:43:34.157215 | orchestrator | Monday 08 September 2025  00:43:12 +0000 (0:00:00.213)       0:00:00.456 ******
2025-09-08 00:43:34.157227 | orchestrator | changed: [testbed-manager]
2025-09-08 00:43:34.157238 | orchestrator |
2025-09-08 00:43:34.157250 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2025-09-08 00:43:34.157261 | orchestrator | Monday 08 September 2025  00:43:13 +0000 (0:00:01.133)       0:00:01.589 ******
2025-09-08 00:43:34.157271 | orchestrator | changed: [testbed-manager]
2025-09-08 00:43:34.157282 | orchestrator |
2025-09-08 00:43:34.157301 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2025-09-08 00:43:34.157313 | orchestrator | Monday 08 September 2025  00:43:23 +0000 (0:00:09.765)       0:00:11.355 ******
2025-09-08 00:43:34.157324 | orchestrator | ok: [testbed-manager]
2025-09-08 00:43:34.157336 | orchestrator |
2025-09-08 00:43:34.157347 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2025-09-08 00:43:34.157357 | orchestrator | Monday 08 September 2025  00:43:24 +0000 (0:00:01.320)       0:00:12.675 ******
2025-09-08 00:43:34.157368 | orchestrator | changed: [testbed-manager]
2025-09-08 00:43:34.157379 | orchestrator |
2025-09-08 00:43:34.157390 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-09-08 00:43:34.157401 | orchestrator | Monday 08 September 2025  00:43:25 +0000 (0:00:00.944)       0:00:13.620 ******
2025-09-08 00:43:34.157412 | orchestrator | ok: [testbed-manager]
2025-09-08 00:43:34.157422 | orchestrator |
2025-09-08 00:43:34.157434 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-09-08 00:43:34.157445 | orchestrator | Monday 08 September 2025  00:43:26 +0000 (0:00:01.184)       0:00:14.805 ******
2025-09-08 00:43:34.157456 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-08 00:43:34.157467 | orchestrator |
2025-09-08 00:43:34.157478 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-09-08 00:43:34.157489 | orchestrator | Monday 08 September 2025  00:43:27 +0000 (0:00:00.820)       0:00:15.626 ******
2025-09-08 00:43:34.157499 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:43:34.157510 | orchestrator |
2025-09-08 00:43:34.157522 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-09-08 00:43:34.157557 | orchestrator | Monday 08 September 2025  00:43:27 +0000 (0:00:00.155)       0:00:15.782 ******
2025-09-08 00:43:34.157568 | orchestrator | changed: [testbed-manager]
2025-09-08 00:43:34.157580 | orchestrator |
2025-09-08 00:43:34.157591 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-09-08 00:43:34.157601 | orchestrator | Monday 08 September 2025  00:43:28 +0000 (0:00:00.976)       0:00:16.758 ******
2025-09-08 00:43:34.157612 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-09-08 00:43:34.157651 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-09-08 00:43:34.157663 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-09-08 00:43:34.157674 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-09-08 00:43:34.157685 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-09-08 00:43:34.157695 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-09-08 00:43:34.157706 | orchestrator |
2025-09-08 00:43:34.157717 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-09-08 00:43:34.157728 | orchestrator | Monday 08 September 2025  00:43:31 +0000 (0:00:02.213)       0:00:18.972 ******
2025-09-08 00:43:34.157739 | orchestrator | ok: [testbed-manager]
2025-09-08 00:43:34.157750 | orchestrator |
2025-09-08 00:43:34.157761 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-09-08 00:43:34.157771 | orchestrator | Monday 08 September 2025  00:43:32 +0000 (0:00:01.463)       0:00:20.435 ******
2025-09-08 00:43:34.157782 | orchestrator | changed: [testbed-manager]
2025-09-08 00:43:34.157793 | orchestrator |
2025-09-08 00:43:34.157804 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:43:34.157815 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-08 00:43:34.157826 | orchestrator |
2025-09-08 00:43:34.157837 | orchestrator |
2025-09-08 00:43:34.157848 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:43:34.157859 | orchestrator | Monday 08 September 2025  00:43:33 +0000 (0:00:01.386)       0:00:21.822 ******
2025-09-08 00:43:34.157869 | orchestrator | ===============================================================================
2025-09-08 00:43:34.157880 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.77s
2025-09-08 00:43:34.157891 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.21s
2025-09-08 00:43:34.157902 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.46s
2025-09-08 00:43:34.157912 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.39s
2025-09-08 00:43:34.157941 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.32s
2025-09-08 00:43:34.157952 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.18s
2025-09-08 00:43:34.157963 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.13s
2025-09-08 00:43:34.157974 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.98s
2025-09-08 00:43:34.157985 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.94s
2025-09-08 00:43:34.157996 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.82s
2025-09-08 00:43:34.158007 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.21s
2025-09-08 00:43:34.158074 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.16s
2025-09-08 00:43:34.359551 | orchestrator |
2025-09-08 00:43:34.361404 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Sep 8 00:43:34 UTC 2025
2025-09-08 00:43:34.361454 | orchestrator |
2025-09-08 00:43:36.047159 | orchestrator | 2025-09-08 00:43:36 | INFO  | Collection nutshell is prepared for execution
2025-09-08 00:43:36.047266 | orchestrator | 2025-09-08 00:43:36 | INFO  | D [0] - dotfiles
2025-09-08 00:43:46.134439 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [0] - homer
2025-09-08 00:43:46.134555 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [0] - netdata
2025-09-08 00:43:46.134571 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [0] - openstackclient
2025-09-08 00:43:46.134600 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [0] - phpmyadmin
2025-09-08 00:43:46.134649 | orchestrator | 2025-09-08 00:43:46 | INFO  | A [0] - common
2025-09-08 00:43:46.138658 | orchestrator | 2025-09-08 00:43:46 | INFO  | A [1] -- loadbalancer
2025-09-08 00:43:46.138969 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [2] --- opensearch
2025-09-08 00:43:46.138995 | orchestrator | 2025-09-08 00:43:46 | INFO  | A [2] --- mariadb-ng
2025-09-08 00:43:46.139008 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [3] ---- horizon
2025-09-08 00:43:46.139020 | orchestrator | 2025-09-08 00:43:46 | INFO  | A [3] ---- keystone
2025-09-08 00:43:46.139275 | orchestrator | 2025-09-08 00:43:46 | INFO  | A [4] ----- neutron
2025-09-08 00:43:46.139556 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [5] ------ wait-for-nova
2025-09-08 00:43:46.139577 | orchestrator | 2025-09-08 00:43:46 | INFO  | A [5] ------ octavia
2025-09-08 00:43:46.141017 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [4] ----- barbican
2025-09-08 00:43:46.141038 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [4] ----- designate
2025-09-08 00:43:46.141451 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [4] ----- ironic
2025-09-08 00:43:46.141471 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [4] ----- placement
2025-09-08 00:43:46.141482 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [4] ----- magnum
2025-09-08 00:43:46.142287 | orchestrator | 2025-09-08 00:43:46 | INFO  | A [1] -- openvswitch
2025-09-08 00:43:46.142311 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [2] --- ovn
2025-09-08 00:43:46.142582 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [1] --
memcached 2025-09-08 00:43:46.142888 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [1] -- redis 2025-09-08 00:43:46.142909 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [1] -- rabbitmq-ng 2025-09-08 00:43:46.143247 | orchestrator | 2025-09-08 00:43:46 | INFO  | A [0] - kubernetes 2025-09-08 00:43:46.145605 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [1] -- kubeconfig 2025-09-08 00:43:46.145741 | orchestrator | 2025-09-08 00:43:46 | INFO  | A [1] -- copy-kubeconfig 2025-09-08 00:43:46.145958 | orchestrator | 2025-09-08 00:43:46 | INFO  | A [0] - ceph 2025-09-08 00:43:46.148554 | orchestrator | 2025-09-08 00:43:46 | INFO  | A [1] -- ceph-pools 2025-09-08 00:43:46.148598 | orchestrator | 2025-09-08 00:43:46 | INFO  | A [2] --- copy-ceph-keys 2025-09-08 00:43:46.148636 | orchestrator | 2025-09-08 00:43:46 | INFO  | A [3] ---- cephclient 2025-09-08 00:43:46.148647 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-09-08 00:43:46.148659 | orchestrator | 2025-09-08 00:43:46 | INFO  | A [4] ----- wait-for-keystone 2025-09-08 00:43:46.148802 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [5] ------ kolla-ceph-rgw 2025-09-08 00:43:46.148936 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [5] ------ glance 2025-09-08 00:43:46.148957 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [5] ------ cinder 2025-09-08 00:43:46.148969 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [5] ------ nova 2025-09-08 00:43:46.149562 | orchestrator | 2025-09-08 00:43:46 | INFO  | A [4] ----- prometheus 2025-09-08 00:43:46.149584 | orchestrator | 2025-09-08 00:43:46 | INFO  | D [5] ------ grafana 2025-09-08 00:43:46.314433 | orchestrator | 2025-09-08 00:43:46 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-09-08 00:43:46.314503 | orchestrator | 2025-09-08 00:43:46 | INFO  | Tasks are running in the background 2025-09-08 00:43:48.811310 | orchestrator | 2025-09-08 00:43:48 | INFO  | No task IDs specified, wait for 
all currently running tasks 2025-09-08 00:43:50.916351 | orchestrator | 2025-09-08 00:43:50 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:43:50.916459 | orchestrator | 2025-09-08 00:43:50 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:43:50.916700 | orchestrator | 2025-09-08 00:43:50 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:43:50.919193 | orchestrator | 2025-09-08 00:43:50 | INFO  | Task 9aebbfdb-f01e-4e0a-b6eb-b476f80cde27 is in state STARTED 2025-09-08 00:43:50.919591 | orchestrator | 2025-09-08 00:43:50 | INFO  | Task 5bd6e698-2598-4280-bc02-9db70e0e9d4d is in state STARTED 2025-09-08 00:43:50.920070 | orchestrator | 2025-09-08 00:43:50 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:43:50.920582 | orchestrator | 2025-09-08 00:43:50 | INFO  | Task 151c337b-3c73-45aa-acc7-375bb7458489 is in state STARTED 2025-09-08 00:43:50.920627 | orchestrator | 2025-09-08 00:43:50 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:43:53.956521 | orchestrator | 2025-09-08 00:43:53 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:43:53.957355 | orchestrator | 2025-09-08 00:43:53 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:43:53.957396 | orchestrator | 2025-09-08 00:43:53 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:43:53.958148 | orchestrator | 2025-09-08 00:43:53 | INFO  | Task 9aebbfdb-f01e-4e0a-b6eb-b476f80cde27 is in state STARTED 2025-09-08 00:43:53.958574 | orchestrator | 2025-09-08 00:43:53 | INFO  | Task 5bd6e698-2598-4280-bc02-9db70e0e9d4d is in state STARTED 2025-09-08 00:43:53.959302 | orchestrator | 2025-09-08 00:43:53 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:43:53.960697 | orchestrator | 2025-09-08 00:43:53 | INFO  | Task 
151c337b-3c73-45aa-acc7-375bb7458489 is in state STARTED 2025-09-08 00:43:53.960735 | orchestrator | 2025-09-08 00:43:53 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:43:57.012266 | orchestrator | 2025-09-08 00:43:57 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:43:57.012469 | orchestrator | 2025-09-08 00:43:57 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:43:57.012498 | orchestrator | 2025-09-08 00:43:57 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:43:57.012898 | orchestrator | 2025-09-08 00:43:57 | INFO  | Task 9aebbfdb-f01e-4e0a-b6eb-b476f80cde27 is in state STARTED 2025-09-08 00:43:57.013351 | orchestrator | 2025-09-08 00:43:57 | INFO  | Task 5bd6e698-2598-4280-bc02-9db70e0e9d4d is in state STARTED 2025-09-08 00:43:57.014145 | orchestrator | 2025-09-08 00:43:57 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:43:57.017453 | orchestrator | 2025-09-08 00:43:57 | INFO  | Task 151c337b-3c73-45aa-acc7-375bb7458489 is in state STARTED 2025-09-08 00:43:57.017476 | orchestrator | 2025-09-08 00:43:57 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:00.072882 | orchestrator | 2025-09-08 00:44:00 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:44:00.073007 | orchestrator | 2025-09-08 00:44:00 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:44:00.079320 | orchestrator | 2025-09-08 00:44:00 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:44:00.086945 | orchestrator | 2025-09-08 00:44:00 | INFO  | Task 9aebbfdb-f01e-4e0a-b6eb-b476f80cde27 is in state STARTED 2025-09-08 00:44:00.091533 | orchestrator | 2025-09-08 00:44:00 | INFO  | Task 5bd6e698-2598-4280-bc02-9db70e0e9d4d is in state STARTED 2025-09-08 00:44:00.118012 | orchestrator | 2025-09-08 00:44:00 | INFO  | Task 
44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:44:00.120667 | orchestrator | 2025-09-08 00:44:00 | INFO  | Task 151c337b-3c73-45aa-acc7-375bb7458489 is in state STARTED 2025-09-08 00:44:00.120690 | orchestrator | 2025-09-08 00:44:00 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:03.311151 | orchestrator | 2025-09-08 00:44:03 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:44:03.311289 | orchestrator | 2025-09-08 00:44:03 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:44:03.311306 | orchestrator | 2025-09-08 00:44:03 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:44:03.311318 | orchestrator | 2025-09-08 00:44:03 | INFO  | Task 9aebbfdb-f01e-4e0a-b6eb-b476f80cde27 is in state STARTED 2025-09-08 00:44:03.311330 | orchestrator | 2025-09-08 00:44:03 | INFO  | Task 5bd6e698-2598-4280-bc02-9db70e0e9d4d is in state STARTED 2025-09-08 00:44:03.311341 | orchestrator | 2025-09-08 00:44:03 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:44:03.311352 | orchestrator | 2025-09-08 00:44:03 | INFO  | Task 151c337b-3c73-45aa-acc7-375bb7458489 is in state STARTED 2025-09-08 00:44:03.311363 | orchestrator | 2025-09-08 00:44:03 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:06.395918 | orchestrator | 2025-09-08 00:44:06 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:44:06.396066 | orchestrator | 2025-09-08 00:44:06 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:44:06.396082 | orchestrator | 2025-09-08 00:44:06 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:44:06.396094 | orchestrator | 2025-09-08 00:44:06 | INFO  | Task 9aebbfdb-f01e-4e0a-b6eb-b476f80cde27 is in state STARTED 2025-09-08 00:44:06.396105 | orchestrator | 2025-09-08 00:44:06 | INFO  | Task 
5bd6e698-2598-4280-bc02-9db70e0e9d4d is in state STARTED 2025-09-08 00:44:06.396115 | orchestrator | 2025-09-08 00:44:06 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:44:06.396126 | orchestrator | 2025-09-08 00:44:06 | INFO  | Task 151c337b-3c73-45aa-acc7-375bb7458489 is in state STARTED 2025-09-08 00:44:06.396137 | orchestrator | 2025-09-08 00:44:06 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:09.463657 | orchestrator | 2025-09-08 00:44:09 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:44:09.467146 | orchestrator | 2025-09-08 00:44:09 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:44:09.467186 | orchestrator | 2025-09-08 00:44:09 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:44:09.467881 | orchestrator | 2025-09-08 00:44:09 | INFO  | Task 9aebbfdb-f01e-4e0a-b6eb-b476f80cde27 is in state STARTED 2025-09-08 00:44:09.468945 | orchestrator | 2025-09-08 00:44:09 | INFO  | Task 5bd6e698-2598-4280-bc02-9db70e0e9d4d is in state STARTED 2025-09-08 00:44:09.470441 | orchestrator | 2025-09-08 00:44:09 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:44:09.470529 | orchestrator | 2025-09-08 00:44:09 | INFO  | Task 151c337b-3c73-45aa-acc7-375bb7458489 is in state STARTED 2025-09-08 00:44:09.470545 | orchestrator | 2025-09-08 00:44:09 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:12.730117 | orchestrator | 2025-09-08 00:44:12.730228 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-09-08 00:44:12.730242 | orchestrator | 2025-09-08 00:44:12.730253 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-09-08 00:44:12.730263 | orchestrator | Monday 08 September 2025 00:43:58 +0000 (0:00:00.777) 0:00:00.777 ****** 2025-09-08 00:44:12.730274 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:44:12.730285 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:44:12.730295 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:44:12.730304 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:44:12.730314 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:44:12.730324 | orchestrator | changed: [testbed-manager] 2025-09-08 00:44:12.730333 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:44:12.730343 | orchestrator | 2025-09-08 00:44:12.730353 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-09-08 00:44:12.730362 | orchestrator | Monday 08 September 2025 00:44:03 +0000 (0:00:04.976) 0:00:05.754 ****** 2025-09-08 00:44:12.730373 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-08 00:44:12.730383 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-08 00:44:12.730392 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-08 00:44:12.730402 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-08 00:44:12.730411 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-08 00:44:12.730421 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-08 00:44:12.730430 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-08 00:44:12.730439 | orchestrator | 2025-09-08 00:44:12.730450 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-09-08 00:44:12.730460 | orchestrator | Monday 08 September 2025 00:44:04 +0000 (0:00:01.154) 0:00:06.909 ****** 2025-09-08 00:44:12.730483 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-08 00:44:03.677271', 'end': '2025-09-08 00:44:03.687294', 'delta': '0:00:00.010023', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-08 00:44:12.730499 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-08 00:44:03.552229', 'end': '2025-09-08 00:44:03.561167', 'delta': '0:00:00.008938', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-08 00:44:12.730534 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-08 00:44:03.948892', 'end': '2025-09-08 00:44:03.955212', 'delta': '0:00:00.006320', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-08 00:44:12.730574 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-08 00:44:04.059897', 'end': '2025-09-08 00:44:04.064733', 'delta': '0:00:00.004836', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-08 00:44:12.730588 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-08 00:44:04.131570', 'end': '2025-09-08 00:44:04.141214', 'delta': '0:00:00.009644', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-08 00:44:12.730915 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-08 00:44:04.168124', 'end': '2025-09-08 00:44:04.177335', 'delta': '0:00:00.009211', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-08 00:44:12.730930 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-08 00:44:03.784963', 'end': '2025-09-08 00:44:03.791084', 'delta': '0:00:00.006121', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-08 00:44:12.730956 | orchestrator | 2025-09-08 00:44:12.730967 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-09-08 00:44:12.730976 | orchestrator | Monday 08 September 2025 00:44:06 +0000 (0:00:01.960) 0:00:08.870 ****** 2025-09-08 00:44:12.730986 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-08 00:44:12.730996 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-08 00:44:12.731006 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-08 00:44:12.731015 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-08 00:44:12.731025 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-08 00:44:12.731034 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-08 00:44:12.731044 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-08 00:44:12.731053 | orchestrator | 2025-09-08 00:44:12.731063 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-09-08 00:44:12.731072 | orchestrator | Monday 08 September 2025 00:44:07 +0000 (0:00:01.170) 0:00:10.040 ****** 2025-09-08 00:44:12.731086 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-09-08 00:44:12.731097 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-09-08 00:44:12.731106 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-09-08 00:44:12.731116 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-09-08 00:44:12.731125 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-09-08 00:44:12.731135 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-09-08 00:44:12.731145 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-09-08 00:44:12.731154 | orchestrator | 2025-09-08 00:44:12.731164 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:44:12.731182 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:44:12.731194 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:44:12.731204 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:44:12.731213 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:44:12.731223 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:44:12.731232 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:44:12.731242 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:44:12.731251 | orchestrator | 2025-09-08 00:44:12.731261 | orchestrator | 2025-09-08 00:44:12.731271 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-09-08 00:44:12.731280 | orchestrator | Monday 08 September 2025 00:44:09 +0000 (0:00:01.999) 0:00:12.040 ****** 2025-09-08 00:44:12.731290 | orchestrator | =============================================================================== 2025-09-08 00:44:12.731299 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.98s 2025-09-08 00:44:12.731309 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.00s 2025-09-08 00:44:12.731326 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.96s 2025-09-08 00:44:12.731336 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.17s 2025-09-08 00:44:12.731345 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.15s 2025-09-08 00:44:12.731355 | orchestrator | 2025-09-08 00:44:12 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:44:12.731365 | orchestrator | 2025-09-08 00:44:12 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:44:12.731375 | orchestrator | 2025-09-08 00:44:12 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:44:12.731384 | orchestrator | 2025-09-08 00:44:12 | INFO  | Task 9aebbfdb-f01e-4e0a-b6eb-b476f80cde27 is in state STARTED 2025-09-08 00:44:12.731394 | orchestrator | 2025-09-08 00:44:12 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:44:12.731403 | orchestrator | 2025-09-08 00:44:12 | INFO  | Task 5bd6e698-2598-4280-bc02-9db70e0e9d4d is in state SUCCESS 2025-09-08 00:44:12.731413 | orchestrator | 2025-09-08 00:44:12 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:44:12.731422 | orchestrator | 2025-09-08 00:44:12 | INFO  | Task 
151c337b-3c73-45aa-acc7-375bb7458489 is in state STARTED 2025-09-08 00:44:12.731432 | orchestrator | 2025-09-08 00:44:12 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:15.681397 | orchestrator | 2025-09-08 00:44:15 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:44:15.681508 | orchestrator | 2025-09-08 00:44:15 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:44:15.681522 | orchestrator | 2025-09-08 00:44:15 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:44:15.681534 | orchestrator | 2025-09-08 00:44:15 | INFO  | Task 9aebbfdb-f01e-4e0a-b6eb-b476f80cde27 is in state STARTED 2025-09-08 00:44:15.681545 | orchestrator | 2025-09-08 00:44:15 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:44:15.681556 | orchestrator | 2025-09-08 00:44:15 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:44:15.681583 | orchestrator | 2025-09-08 00:44:15 | INFO  | Task 151c337b-3c73-45aa-acc7-375bb7458489 is in state STARTED 2025-09-08 00:44:15.681620 | orchestrator | 2025-09-08 00:44:15 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:18.742507 | orchestrator | 2025-09-08 00:44:18 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:44:18.742684 | orchestrator | 2025-09-08 00:44:18 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:44:18.742701 | orchestrator | 2025-09-08 00:44:18 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:44:18.742714 | orchestrator | 2025-09-08 00:44:18 | INFO  | Task 9aebbfdb-f01e-4e0a-b6eb-b476f80cde27 is in state STARTED 2025-09-08 00:44:18.744119 | orchestrator | 2025-09-08 00:44:18 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:44:18.748333 | orchestrator | 2025-09-08 00:44:18 | INFO  | Task 
44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:44:18.752282 | orchestrator | 2025-09-08 00:44:18 | INFO  | Task 151c337b-3c73-45aa-acc7-375bb7458489 is in state STARTED 2025-09-08 00:44:18.752321 | orchestrator | 2025-09-08 00:44:18 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:21.914139 | orchestrator | 2025-09-08 00:44:21 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:44:21.915149 | orchestrator | 2025-09-08 00:44:21 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:44:21.916619 | orchestrator | 2025-09-08 00:44:21 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:44:21.917729 | orchestrator | 2025-09-08 00:44:21 | INFO  | Task 9aebbfdb-f01e-4e0a-b6eb-b476f80cde27 is in state STARTED 2025-09-08 00:44:21.918690 | orchestrator | 2025-09-08 00:44:21 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:44:21.919405 | orchestrator | 2025-09-08 00:44:21 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:44:21.920573 | orchestrator | 2025-09-08 00:44:21 | INFO  | Task 151c337b-3c73-45aa-acc7-375bb7458489 is in state STARTED 2025-09-08 00:44:21.920627 | orchestrator | 2025-09-08 00:44:21 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:24.959482 | orchestrator | 2025-09-08 00:44:24 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:44:24.959841 | orchestrator | 2025-09-08 00:44:24 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:44:24.960245 | orchestrator | 2025-09-08 00:44:24 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:44:24.961741 | orchestrator | 2025-09-08 00:44:24 | INFO  | Task 9aebbfdb-f01e-4e0a-b6eb-b476f80cde27 is in state STARTED 2025-09-08 00:44:24.962714 | orchestrator | 2025-09-08 00:44:24 | INFO  | Task 
731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:44:24.963405 | orchestrator | 2025-09-08 00:44:24 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:44:24.964127 | orchestrator | 2025-09-08 00:44:24 | INFO  | Task 151c337b-3c73-45aa-acc7-375bb7458489 is in state STARTED 2025-09-08 00:44:24.964151 | orchestrator | 2025-09-08 00:44:24 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:27.988417 | orchestrator | 2025-09-08 00:44:27 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:44:27.988548 | orchestrator | 2025-09-08 00:44:27 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:44:27.990449 | orchestrator | 2025-09-08 00:44:27 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:44:27.991029 | orchestrator | 2025-09-08 00:44:27 | INFO  | Task 9aebbfdb-f01e-4e0a-b6eb-b476f80cde27 is in state STARTED 2025-09-08 00:44:27.991953 | orchestrator | 2025-09-08 00:44:27 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:44:27.994955 | orchestrator | 2025-09-08 00:44:27 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:44:27.995975 | orchestrator | 2025-09-08 00:44:27 | INFO  | Task 151c337b-3c73-45aa-acc7-375bb7458489 is in state STARTED 2025-09-08 00:44:27.995999 | orchestrator | 2025-09-08 00:44:27 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:31.024910 | orchestrator | 2025-09-08 00:44:31 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:44:31.025754 | orchestrator | 2025-09-08 00:44:31 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:44:31.028205 | orchestrator | 2025-09-08 00:44:31 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:44:31.029323 | orchestrator | 2025-09-08 00:44:31 | INFO  | Task 
9aebbfdb-f01e-4e0a-b6eb-b476f80cde27 is in state STARTED 2025-09-08 00:44:31.031540 | orchestrator | 2025-09-08 00:44:31 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:44:31.032610 | orchestrator | 2025-09-08 00:44:31 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:44:31.035184 | orchestrator | 2025-09-08 00:44:31 | INFO  | Task 151c337b-3c73-45aa-acc7-375bb7458489 is in state STARTED 2025-09-08 00:44:31.035414 | orchestrator | 2025-09-08 00:44:31 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:34.079137 | orchestrator | 2025-09-08 00:44:34 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:44:34.079251 | orchestrator | 2025-09-08 00:44:34 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:44:34.079266 | orchestrator | 2025-09-08 00:44:34 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:44:34.079278 | orchestrator | 2025-09-08 00:44:34 | INFO  | Task 9aebbfdb-f01e-4e0a-b6eb-b476f80cde27 is in state STARTED 2025-09-08 00:44:34.079289 | orchestrator | 2025-09-08 00:44:34 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:44:34.079300 | orchestrator | 2025-09-08 00:44:34 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:44:34.079311 | orchestrator | 2025-09-08 00:44:34 | INFO  | Task 151c337b-3c73-45aa-acc7-375bb7458489 is in state STARTED 2025-09-08 00:44:34.079322 | orchestrator | 2025-09-08 00:44:34 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:37.125271 | orchestrator | 2025-09-08 00:44:37 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:44:37.127920 | orchestrator | 2025-09-08 00:44:37 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:44:37.132309 | orchestrator | 2025-09-08 00:44:37 | INFO  | Task 
bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:44:37.135925 | orchestrator | 2025-09-08 00:44:37 | INFO  | Task 9aebbfdb-f01e-4e0a-b6eb-b476f80cde27 is in state STARTED 2025-09-08 00:44:37.139178 | orchestrator | 2025-09-08 00:44:37 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:44:37.144014 | orchestrator | 2025-09-08 00:44:37 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:44:37.146323 | orchestrator | 2025-09-08 00:44:37 | INFO  | Task 151c337b-3c73-45aa-acc7-375bb7458489 is in state SUCCESS 2025-09-08 00:44:37.146367 | orchestrator | 2025-09-08 00:44:37 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:40.253690 | orchestrator | 2025-09-08 00:44:40 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:44:40.253801 | orchestrator | 2025-09-08 00:44:40 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:44:40.253814 | orchestrator | 2025-09-08 00:44:40 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:44:40.253824 | orchestrator | 2025-09-08 00:44:40 | INFO  | Task 9aebbfdb-f01e-4e0a-b6eb-b476f80cde27 is in state STARTED 2025-09-08 00:44:40.253833 | orchestrator | 2025-09-08 00:44:40 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:44:40.253843 | orchestrator | 2025-09-08 00:44:40 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:44:40.253854 | orchestrator | 2025-09-08 00:44:40 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:43.286175 | orchestrator | 2025-09-08 00:44:43 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:44:43.289946 | orchestrator | 2025-09-08 00:44:43 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:44:43.295507 | orchestrator | 2025-09-08 00:44:43 | INFO  | Task 
bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:44:43.295540 | orchestrator | 2025-09-08 00:44:43 | INFO  | Task 9aebbfdb-f01e-4e0a-b6eb-b476f80cde27 is in state STARTED 2025-09-08 00:44:43.295553 | orchestrator | 2025-09-08 00:44:43 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:44:43.298630 | orchestrator | 2025-09-08 00:44:43 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:44:43.298680 | orchestrator | 2025-09-08 00:44:43 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:46.358766 | orchestrator | 2025-09-08 00:44:46 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:44:46.358871 | orchestrator | 2025-09-08 00:44:46 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:44:46.360806 | orchestrator | 2025-09-08 00:44:46 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:44:46.363331 | orchestrator | 2025-09-08 00:44:46 | INFO  | Task 9aebbfdb-f01e-4e0a-b6eb-b476f80cde27 is in state SUCCESS 2025-09-08 00:44:46.363357 | orchestrator | 2025-09-08 00:44:46 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:44:46.364074 | orchestrator | 2025-09-08 00:44:46 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:44:46.364190 | orchestrator | 2025-09-08 00:44:46 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:49.457525 | orchestrator | 2025-09-08 00:44:49 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:44:49.457691 | orchestrator | 2025-09-08 00:44:49 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:44:49.457708 | orchestrator | 2025-09-08 00:44:49 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:44:49.457720 | orchestrator | 2025-09-08 00:44:49 | INFO  | Task 
731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:44:49.458775 | orchestrator | 2025-09-08 00:44:49 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:44:49.458804 | orchestrator | 2025-09-08 00:44:49 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:52.548715 | orchestrator | 2025-09-08 00:44:52 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:44:52.549503 | orchestrator | 2025-09-08 00:44:52 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:44:52.551187 | orchestrator | 2025-09-08 00:44:52 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:44:52.552720 | orchestrator | 2025-09-08 00:44:52 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:44:52.556601 | orchestrator | 2025-09-08 00:44:52 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:44:52.556638 | orchestrator | 2025-09-08 00:44:52 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:55.605065 | orchestrator | 2025-09-08 00:44:55 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:44:55.610659 | orchestrator | 2025-09-08 00:44:55 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:44:55.610974 | orchestrator | 2025-09-08 00:44:55 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:44:55.612770 | orchestrator | 2025-09-08 00:44:55 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:44:55.613032 | orchestrator | 2025-09-08 00:44:55 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:44:55.613468 | orchestrator | 2025-09-08 00:44:55 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:58.712752 | orchestrator | 2025-09-08 00:44:58 | INFO  | Task 
dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:44:58.715491 | orchestrator | 2025-09-08 00:44:58 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:44:58.716385 | orchestrator | 2025-09-08 00:44:58 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:44:58.716777 | orchestrator | 2025-09-08 00:44:58 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:44:58.717755 | orchestrator | 2025-09-08 00:44:58 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:44:58.717776 | orchestrator | 2025-09-08 00:44:58 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:45:01.778258 | orchestrator | 2025-09-08 00:45:01 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:45:01.780839 | orchestrator | 2025-09-08 00:45:01 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:45:01.782513 | orchestrator | 2025-09-08 00:45:01 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:45:01.784991 | orchestrator | 2025-09-08 00:45:01 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:45:01.786378 | orchestrator | 2025-09-08 00:45:01 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:45:01.786404 | orchestrator | 2025-09-08 00:45:01 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:45:04.922709 | orchestrator | 2025-09-08 00:45:04 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:45:04.925379 | orchestrator | 2025-09-08 00:45:04 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:45:04.930772 | orchestrator | 2025-09-08 00:45:04 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:45:04.931641 | orchestrator | 2025-09-08 00:45:04 | INFO  | Task 
731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:45:04.935299 | orchestrator | 2025-09-08 00:45:04 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:45:04.935421 | orchestrator | 2025-09-08 00:45:04 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:45:07.980785 | orchestrator | 2025-09-08 00:45:07 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:45:07.982523 | orchestrator | 2025-09-08 00:45:07 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:45:07.990527 | orchestrator | 2025-09-08 00:45:07 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:45:07.992676 | orchestrator | 2025-09-08 00:45:07 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:45:07.994101 | orchestrator | 2025-09-08 00:45:07 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:45:07.994127 | orchestrator | 2025-09-08 00:45:07 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:45:11.111421 | orchestrator | 2025-09-08 00:45:11 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:45:11.111514 | orchestrator | 2025-09-08 00:45:11 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:45:11.111529 | orchestrator | 2025-09-08 00:45:11 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:45:11.111605 | orchestrator | 2025-09-08 00:45:11 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:45:11.111619 | orchestrator | 2025-09-08 00:45:11 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:45:11.111630 | orchestrator | 2025-09-08 00:45:11 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:45:14.109604 | orchestrator | 2025-09-08 00:45:14 | INFO  | Task 
dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:45:14.111015 | orchestrator | 2025-09-08 00:45:14 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:45:14.112766 | orchestrator | 2025-09-08 00:45:14 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:45:14.115960 | orchestrator | 2025-09-08 00:45:14 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:45:14.117867 | orchestrator | 2025-09-08 00:45:14 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state STARTED 2025-09-08 00:45:14.117893 | orchestrator | 2025-09-08 00:45:14 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:45:17.162726 | orchestrator | 2025-09-08 00:45:17 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:45:17.163898 | orchestrator | 2025-09-08 00:45:17 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED 2025-09-08 00:45:17.165123 | orchestrator | 2025-09-08 00:45:17 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED 2025-09-08 00:45:17.167077 | orchestrator | 2025-09-08 00:45:17 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED 2025-09-08 00:45:17.167833 | orchestrator | 2025-09-08 00:45:17 | INFO  | Task 44cb5c0c-c784-4084-94bd-e05823fb1bcb is in state SUCCESS 2025-09-08 00:45:17.169692 | orchestrator | 2025-09-08 00:45:17 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:45:17.171711 | orchestrator | 2025-09-08 00:45:17.171751 | orchestrator | 2025-09-08 00:45:17.171763 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-09-08 00:45:17.171775 | orchestrator | 2025-09-08 00:45:17.171786 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-09-08 00:45:17.171799 | orchestrator | Monday 08 September 2025 00:43:57 +0000 (0:00:00.692) 
0:00:00.692 ****** 2025-09-08 00:45:17.171810 | orchestrator | ok: [testbed-manager] => { 2025-09-08 00:45:17.171823 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-09-08 00:45:17.171836 | orchestrator | } 2025-09-08 00:45:17.171847 | orchestrator | 2025-09-08 00:45:17.171858 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-09-08 00:45:17.171869 | orchestrator | Monday 08 September 2025 00:43:57 +0000 (0:00:00.277) 0:00:00.969 ****** 2025-09-08 00:45:17.171880 | orchestrator | ok: [testbed-manager] 2025-09-08 00:45:17.171892 | orchestrator | 2025-09-08 00:45:17.171903 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-09-08 00:45:17.171913 | orchestrator | Monday 08 September 2025 00:44:00 +0000 (0:00:02.917) 0:00:03.887 ****** 2025-09-08 00:45:17.171924 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-09-08 00:45:17.171935 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-09-08 00:45:17.171946 | orchestrator | 2025-09-08 00:45:17.171991 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-09-08 00:45:17.172003 | orchestrator | Monday 08 September 2025 00:44:03 +0000 (0:00:02.280) 0:00:06.167 ****** 2025-09-08 00:45:17.172042 | orchestrator | changed: [testbed-manager] 2025-09-08 00:45:17.172053 | orchestrator | 2025-09-08 00:45:17.172064 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-09-08 00:45:17.172094 | orchestrator | Monday 08 September 2025 00:44:04 +0000 (0:00:01.865) 0:00:08.033 ****** 2025-09-08 00:45:17.172105 | orchestrator | changed: [testbed-manager] 2025-09-08 00:45:17.172116 | orchestrator | 2025-09-08 00:45:17.172126 | orchestrator | TASK [osism.services.homer : Manage homer service] 
***************************** 2025-09-08 00:45:17.172137 | orchestrator | Monday 08 September 2025 00:44:07 +0000 (0:00:02.778) 0:00:10.811 ****** 2025-09-08 00:45:17.172148 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2025-09-08 00:45:17.172159 | orchestrator | ok: [testbed-manager] 2025-09-08 00:45:17.172169 | orchestrator | 2025-09-08 00:45:17.172180 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-09-08 00:45:17.172191 | orchestrator | Monday 08 September 2025 00:44:32 +0000 (0:00:25.111) 0:00:35.923 ****** 2025-09-08 00:45:17.172201 | orchestrator | changed: [testbed-manager] 2025-09-08 00:45:17.172212 | orchestrator | 2025-09-08 00:45:17.172223 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:45:17.172234 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:45:17.172246 | orchestrator | 2025-09-08 00:45:17.172257 | orchestrator | 2025-09-08 00:45:17.172267 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:45:17.172279 | orchestrator | Monday 08 September 2025 00:44:35 +0000 (0:00:02.402) 0:00:38.325 ****** 2025-09-08 00:45:17.172292 | orchestrator | =============================================================================== 2025-09-08 00:45:17.172305 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.11s 2025-09-08 00:45:17.172318 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.92s 2025-09-08 00:45:17.172330 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.78s 2025-09-08 00:45:17.172343 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.40s 2025-09-08 00:45:17.172355 | orchestrator | 
osism.services.homer : Create required directories ---------------------- 2.28s 2025-09-08 00:45:17.172367 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.87s 2025-09-08 00:45:17.172380 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.28s 2025-09-08 00:45:17.172392 | orchestrator | 2025-09-08 00:45:17.172404 | orchestrator | 2025-09-08 00:45:17.172417 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-09-08 00:45:17.172430 | orchestrator | 2025-09-08 00:45:17.172443 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-09-08 00:45:17.172456 | orchestrator | Monday 08 September 2025 00:43:57 +0000 (0:00:00.415) 0:00:00.415 ****** 2025-09-08 00:45:17.172470 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-09-08 00:45:17.172484 | orchestrator | 2025-09-08 00:45:17.172496 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-09-08 00:45:17.172509 | orchestrator | Monday 08 September 2025 00:43:57 +0000 (0:00:00.406) 0:00:00.822 ****** 2025-09-08 00:45:17.172521 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-09-08 00:45:17.172534 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-09-08 00:45:17.172548 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-09-08 00:45:17.172560 | orchestrator | 2025-09-08 00:45:17.172608 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-09-08 00:45:17.172621 | orchestrator | Monday 08 September 2025 00:44:01 +0000 (0:00:03.184) 0:00:04.007 ****** 2025-09-08 00:45:17.172634 | orchestrator | changed: 
[testbed-manager] 2025-09-08 00:45:17.172644 | orchestrator | 2025-09-08 00:45:17.172655 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-09-08 00:45:17.172673 | orchestrator | Monday 08 September 2025 00:44:03 +0000 (0:00:02.566) 0:00:06.573 ****** 2025-09-08 00:45:17.172696 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-09-08 00:45:17.172707 | orchestrator | ok: [testbed-manager] 2025-09-08 00:45:17.172718 | orchestrator | 2025-09-08 00:45:17.172729 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-09-08 00:45:17.172739 | orchestrator | Monday 08 September 2025 00:44:37 +0000 (0:00:33.614) 0:00:40.187 ****** 2025-09-08 00:45:17.172750 | orchestrator | changed: [testbed-manager] 2025-09-08 00:45:17.172761 | orchestrator | 2025-09-08 00:45:17.172777 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-09-08 00:45:17.172788 | orchestrator | Monday 08 September 2025 00:44:38 +0000 (0:00:01.078) 0:00:41.266 ****** 2025-09-08 00:45:17.172798 | orchestrator | ok: [testbed-manager] 2025-09-08 00:45:17.172809 | orchestrator | 2025-09-08 00:45:17.172820 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-09-08 00:45:17.172830 | orchestrator | Monday 08 September 2025 00:44:39 +0000 (0:00:00.845) 0:00:42.112 ****** 2025-09-08 00:45:17.172841 | orchestrator | changed: [testbed-manager] 2025-09-08 00:45:17.172852 | orchestrator | 2025-09-08 00:45:17.172862 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-09-08 00:45:17.172873 | orchestrator | Monday 08 September 2025 00:44:42 +0000 (0:00:03.164) 0:00:45.277 ****** 2025-09-08 00:45:17.172884 | orchestrator | changed: [testbed-manager] 2025-09-08 00:45:17.172894 | orchestrator | 2025-09-08 
00:45:17.172905 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-09-08 00:45:17.172916 | orchestrator | Monday 08 September 2025 00:44:43 +0000 (0:00:01.530) 0:00:46.807 ****** 2025-09-08 00:45:17.172926 | orchestrator | changed: [testbed-manager] 2025-09-08 00:45:17.172937 | orchestrator | 2025-09-08 00:45:17.172947 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-09-08 00:45:17.172958 | orchestrator | Monday 08 September 2025 00:44:44 +0000 (0:00:00.856) 0:00:47.663 ****** 2025-09-08 00:45:17.172969 | orchestrator | ok: [testbed-manager] 2025-09-08 00:45:17.172979 | orchestrator | 2025-09-08 00:45:17.172990 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:45:17.173001 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:45:17.173012 | orchestrator | 2025-09-08 00:45:17.173022 | orchestrator | 2025-09-08 00:45:17.173033 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:45:17.173044 | orchestrator | Monday 08 September 2025 00:44:45 +0000 (0:00:00.760) 0:00:48.424 ****** 2025-09-08 00:45:17.173054 | orchestrator | =============================================================================== 2025-09-08 00:45:17.173065 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.61s 2025-09-08 00:45:17.173076 | orchestrator | osism.services.openstackclient : Create required directories ------------ 3.18s 2025-09-08 00:45:17.173086 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.16s 2025-09-08 00:45:17.173097 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.57s 2025-09-08 00:45:17.173108 | orchestrator | osism.services.openstackclient : Ensure that 
all containers are up ------ 1.53s 2025-09-08 00:45:17.173119 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.08s 2025-09-08 00:45:17.173129 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.86s 2025-09-08 00:45:17.173140 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.85s 2025-09-08 00:45:17.173150 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.76s 2025-09-08 00:45:17.173161 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.41s 2025-09-08 00:45:17.173172 | orchestrator | 2025-09-08 00:45:17.173182 | orchestrator | 2025-09-08 00:45:17.173199 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 00:45:17.173209 | orchestrator | 2025-09-08 00:45:17.173220 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 00:45:17.173231 | orchestrator | Monday 08 September 2025 00:43:56 +0000 (0:00:00.418) 0:00:00.418 ****** 2025-09-08 00:45:17.173242 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-09-08 00:45:17.173252 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-09-08 00:45:17.173263 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-09-08 00:45:17.173273 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-09-08 00:45:17.173284 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-09-08 00:45:17.173295 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-09-08 00:45:17.173305 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-09-08 00:45:17.173316 | orchestrator | 2025-09-08 00:45:17.173326 | orchestrator | PLAY [Apply role netdata] 
****************************************************** 2025-09-08 00:45:17.173337 | orchestrator | 2025-09-08 00:45:17.173348 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-09-08 00:45:17.173358 | orchestrator | Monday 08 September 2025 00:43:58 +0000 (0:00:01.710) 0:00:02.128 ****** 2025-09-08 00:45:17.173384 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:45:17.173398 | orchestrator | 2025-09-08 00:45:17.173409 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-09-08 00:45:17.173420 | orchestrator | Monday 08 September 2025 00:43:59 +0000 (0:00:01.311) 0:00:03.439 ****** 2025-09-08 00:45:17.173430 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:45:17.173441 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:45:17.173452 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:45:17.173462 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:45:17.173473 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:45:17.173490 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:45:17.173501 | orchestrator | ok: [testbed-manager] 2025-09-08 00:45:17.173512 | orchestrator | 2025-09-08 00:45:17.173523 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-09-08 00:45:17.173533 | orchestrator | Monday 08 September 2025 00:44:01 +0000 (0:00:02.077) 0:00:05.517 ****** 2025-09-08 00:45:17.173544 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:45:17.173554 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:45:17.173582 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:45:17.173598 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:45:17.173609 | orchestrator | ok: [testbed-node-5] 2025-09-08 
00:45:17.173620 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:45:17.173631 | orchestrator | ok: [testbed-manager] 2025-09-08 00:45:17.173641 | orchestrator | 2025-09-08 00:45:17.173652 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-09-08 00:45:17.173663 | orchestrator | Monday 08 September 2025 00:44:04 +0000 (0:00:02.870) 0:00:08.387 ****** 2025-09-08 00:45:17.173674 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:45:17.173685 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:45:17.173696 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:45:17.173707 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:45:17.173717 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:45:17.173728 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:45:17.173739 | orchestrator | changed: [testbed-manager] 2025-09-08 00:45:17.173750 | orchestrator | 2025-09-08 00:45:17.173760 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-09-08 00:45:17.173771 | orchestrator | Monday 08 September 2025 00:44:06 +0000 (0:00:02.307) 0:00:10.694 ****** 2025-09-08 00:45:17.173782 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:45:17.173799 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:45:17.173810 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:45:17.173821 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:45:17.173831 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:45:17.173842 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:45:17.173853 | orchestrator | changed: [testbed-manager] 2025-09-08 00:45:17.173863 | orchestrator | 2025-09-08 00:45:17.173874 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-09-08 00:45:17.173885 | orchestrator | Monday 08 September 2025 00:44:19 +0000 (0:00:12.223) 0:00:22.917 ****** 2025-09-08 00:45:17.173896 | 
orchestrator | changed: [testbed-node-3] 2025-09-08 00:45:17.173907 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:45:17.173917 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:45:17.173928 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:45:17.173939 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:45:17.173949 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:45:17.173960 | orchestrator | changed: [testbed-manager] 2025-09-08 00:45:17.173971 | orchestrator | 2025-09-08 00:45:17.173982 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-09-08 00:45:17.173993 | orchestrator | Monday 08 September 2025 00:44:52 +0000 (0:00:32.845) 0:00:55.763 ****** 2025-09-08 00:45:17.174004 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:45:17.174072 | orchestrator | 2025-09-08 00:45:17.174086 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-09-08 00:45:17.174097 | orchestrator | Monday 08 September 2025 00:44:53 +0000 (0:00:01.499) 0:00:57.262 ****** 2025-09-08 00:45:17.174108 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-09-08 00:45:17.174119 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-09-08 00:45:17.174130 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-09-08 00:45:17.174141 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-09-08 00:45:17.174152 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-09-08 00:45:17.174162 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-09-08 00:45:17.174173 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-09-08 00:45:17.174184 | orchestrator | 
changed: [testbed-node-2] => (item=stream.conf) 2025-09-08 00:45:17.174194 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-09-08 00:45:17.174205 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-09-08 00:45:17.174216 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-09-08 00:45:17.174226 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-09-08 00:45:17.174237 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-09-08 00:45:17.174248 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-09-08 00:45:17.174259 | orchestrator | 2025-09-08 00:45:17.174269 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-09-08 00:45:17.174280 | orchestrator | Monday 08 September 2025 00:45:00 +0000 (0:00:07.097) 0:01:04.360 ****** 2025-09-08 00:45:17.174291 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:45:17.174302 | orchestrator | ok: [testbed-manager] 2025-09-08 00:45:17.174313 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:45:17.174324 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:45:17.174334 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:45:17.174345 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:45:17.174356 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:45:17.174367 | orchestrator | 2025-09-08 00:45:17.174377 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-09-08 00:45:17.174388 | orchestrator | Monday 08 September 2025 00:45:02 +0000 (0:00:01.440) 0:01:05.800 ****** 2025-09-08 00:45:17.174399 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:45:17.174416 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:45:17.174427 | orchestrator | changed: [testbed-manager] 2025-09-08 00:45:17.174438 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:45:17.174448 | orchestrator | changed: [testbed-node-3] 
2025-09-08 00:45:17.174459 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:45:17.174470 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:45:17.174480 | orchestrator |
2025-09-08 00:45:17.174491 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-09-08 00:45:17.174509 | orchestrator | Monday 08 September 2025 00:45:03 +0000 (0:00:01.692) 0:01:07.493 ******
2025-09-08 00:45:17.174520 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:45:17.174531 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:45:17.174541 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:45:17.174552 | orchestrator | ok: [testbed-manager]
2025-09-08 00:45:17.174765 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:45:17.174891 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:45:17.174906 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:45:17.174919 | orchestrator |
2025-09-08 00:45:17.174933 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-09-08 00:45:17.174975 | orchestrator | Monday 08 September 2025 00:45:05 +0000 (0:00:01.440) 0:01:08.933 ******
2025-09-08 00:45:17.174987 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:45:17.174998 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:45:17.175008 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:45:17.175019 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:45:17.175030 | orchestrator | ok: [testbed-manager]
2025-09-08 00:45:17.175041 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:45:17.175052 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:45:17.175063 | orchestrator |
2025-09-08 00:45:17.175074 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-09-08 00:45:17.175085 | orchestrator | Monday 08 September 2025 00:45:07 +0000 (0:00:02.350) 0:01:11.284 ******
2025-09-08 00:45:17.175098 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-09-08 00:45:17.175112 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:45:17.175126 | orchestrator |
2025-09-08 00:45:17.175137 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-09-08 00:45:17.175148 | orchestrator | Monday 08 September 2025 00:45:08 +0000 (0:00:01.374) 0:01:12.659 ******
2025-09-08 00:45:17.175159 | orchestrator | changed: [testbed-manager]
2025-09-08 00:45:17.175170 | orchestrator |
2025-09-08 00:45:17.175181 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-09-08 00:45:17.175192 | orchestrator | Monday 08 September 2025 00:45:11 +0000 (0:00:02.086) 0:01:14.745 ******
2025-09-08 00:45:17.175203 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:45:17.175213 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:45:17.175224 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:45:17.175234 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:45:17.175245 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:45:17.175255 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:45:17.175266 | orchestrator | changed: [testbed-manager]
2025-09-08 00:45:17.175276 | orchestrator |
2025-09-08 00:45:17.175287 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:45:17.175298 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:45:17.175311 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:45:17.175322 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:45:17.175370 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:45:17.175382 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:45:17.175393 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:45:17.175403 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:45:17.175414 | orchestrator |
2025-09-08 00:45:17.175425 | orchestrator |
2025-09-08 00:45:17.175436 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:45:17.175447 | orchestrator | Monday 08 September 2025 00:45:14 +0000 (0:00:03.089) 0:01:17.835 ******
2025-09-08 00:45:17.175457 | orchestrator | ===============================================================================
2025-09-08 00:45:17.175468 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 32.85s
2025-09-08 00:45:17.175479 | orchestrator | osism.services.netdata : Add repository -------------------------------- 12.22s
2025-09-08 00:45:17.175489 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 7.10s
2025-09-08 00:45:17.175500 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.09s
2025-09-08 00:45:17.175510 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.87s
2025-09-08 00:45:17.175521 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.35s
2025-09-08 00:45:17.175532 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.31s
2025-09-08 00:45:17.175542 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.09s
2025-09-08 00:45:17.175553 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.08s
2025-09-08 00:45:17.175611 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.71s
2025-09-08 00:45:17.175623 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.69s
2025-09-08 00:45:17.175668 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.50s
2025-09-08 00:45:17.175680 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.44s
2025-09-08 00:45:17.175692 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.44s
2025-09-08 00:45:17.175703 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.37s
2025-09-08 00:45:17.175719 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.31s
2025-09-08 00:45:20.211320 | orchestrator | 2025-09-08 00:45:20 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:45:20.213702 | orchestrator | 2025-09-08 00:45:20 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:45:20.216940 | orchestrator | 2025-09-08 00:45:20 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:45:20.218236 | orchestrator | 2025-09-08 00:45:20 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED
2025-09-08 00:45:20.219261 | orchestrator | 2025-09-08 00:45:20 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:45:23.268765 | orchestrator | 2025-09-08 00:45:23 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:45:23.269446 | orchestrator | 2025-09-08 00:45:23 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
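The repeating "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines here come from the osism client polling the state of each queued Ansible task until every one reports SUCCESS. A minimal sketch of that poll-until-done loop, assuming a hypothetical `get_state` callable standing in for whatever task-state API the real client uses:

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=300.0, sleep=time.sleep):
    """Poll each task until every one reports SUCCESS.

    get_state is a caller-supplied callable (hypothetical here) mapping a
    task ID to its current state string, e.g. "STARTED" or "SUCCESS".
    Returns the final state of every task; raises TimeoutError on timeout.
    """
    pending = set(task_ids)
    states = {}
    deadline = time.monotonic() + timeout
    while pending:
        # Check every still-pending task, logging like the console output above.
        for task_id in sorted(pending):
            states[task_id] = get_state(task_id)
            print(f"INFO  | Task {task_id} is in state {states[task_id]}")
        pending = {t for t in pending if states[t] != "SUCCESS"}
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"INFO  | Wait {interval:g} second(s) until the next check")
            sleep(interval)
    return states
```

Note that a task that has already reached SUCCESS (like 731cf80b above at 00:45:26) drops out of the poll set, which is why later iterations in the log report only three tasks.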
2025-09-08 00:45:23.271789 | orchestrator | 2025-09-08 00:45:23 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:45:23.276984 | orchestrator | 2025-09-08 00:45:23 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state STARTED
2025-09-08 00:45:23.277507 | orchestrator | 2025-09-08 00:45:23 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:45:26.347618 | orchestrator | 2025-09-08 00:45:26 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:45:26.348713 | orchestrator | 2025-09-08 00:45:26 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:45:26.351552 | orchestrator | 2025-09-08 00:45:26 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:45:26.353112 | orchestrator | 2025-09-08 00:45:26 | INFO  | Task 731cf80b-f4b2-4c8a-9b83-61983bc5ac72 is in state SUCCESS
2025-09-08 00:45:26.353272 | orchestrator | 2025-09-08 00:45:26 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:45:29.390708 | orchestrator | 2025-09-08 00:45:29 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:45:29.390831 | orchestrator | 2025-09-08 00:45:29 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:45:29.391545 | orchestrator | 2025-09-08 00:45:29 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:45:29.391592 | orchestrator | 2025-09-08 00:45:29 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:45:32.463055 | orchestrator | 2025-09-08 00:45:32 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:45:32.464075 | orchestrator | 2025-09-08 00:45:32 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:45:32.467176 | orchestrator | 2025-09-08 00:45:32 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:45:32.467218 | orchestrator | 2025-09-08 00:45:32 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:45:35.510429 | orchestrator | 2025-09-08 00:45:35 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:45:35.513691 | orchestrator | 2025-09-08 00:45:35 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:45:35.515643 | orchestrator | 2025-09-08 00:45:35 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:45:35.515675 | orchestrator | 2025-09-08 00:45:35 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:45:38.570442 | orchestrator | 2025-09-08 00:45:38 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:45:38.571364 | orchestrator | 2025-09-08 00:45:38 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:45:38.573551 | orchestrator | 2025-09-08 00:45:38 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:45:38.573602 | orchestrator | 2025-09-08 00:45:38 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:45:41.617780 | orchestrator | 2025-09-08 00:45:41 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:45:41.618297 | orchestrator | 2025-09-08 00:45:41 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:45:41.619647 | orchestrator | 2025-09-08 00:45:41 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:45:41.619673 | orchestrator | 2025-09-08 00:45:41 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:45:44.668679 | orchestrator | 2025-09-08 00:45:44 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:45:44.670125 | orchestrator | 2025-09-08 00:45:44 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:45:44.671346 | orchestrator | 2025-09-08 00:45:44 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:45:44.671655 | orchestrator | 2025-09-08 00:45:44 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:45:47.709768 | orchestrator | 2025-09-08 00:45:47 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:45:47.709908 | orchestrator | 2025-09-08 00:45:47 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:45:47.711812 | orchestrator | 2025-09-08 00:45:47 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:45:47.712626 | orchestrator | 2025-09-08 00:45:47 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:45:50.762549 | orchestrator | 2025-09-08 00:45:50 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:45:50.763810 | orchestrator | 2025-09-08 00:45:50 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:45:50.765144 | orchestrator | 2025-09-08 00:45:50 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:45:50.765435 | orchestrator | 2025-09-08 00:45:50 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:45:53.814183 | orchestrator | 2025-09-08 00:45:53 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:45:53.814289 | orchestrator | 2025-09-08 00:45:53 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:45:53.814302 | orchestrator | 2025-09-08 00:45:53 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:45:53.814313 | orchestrator | 2025-09-08 00:45:53 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:45:56.889678 | orchestrator | 2025-09-08 00:45:56 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:45:56.891822 | orchestrator | 2025-09-08 00:45:56 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:45:56.894599 | orchestrator | 2025-09-08 00:45:56 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:45:56.895106 | orchestrator | 2025-09-08 00:45:56 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:45:59.961839 | orchestrator | 2025-09-08 00:45:59 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:45:59.961966 | orchestrator | 2025-09-08 00:45:59 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:45:59.964801 | orchestrator | 2025-09-08 00:45:59 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:45:59.964826 | orchestrator | 2025-09-08 00:45:59 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:03.011032 | orchestrator | 2025-09-08 00:46:03 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:46:03.011138 | orchestrator | 2025-09-08 00:46:03 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:46:03.011148 | orchestrator | 2025-09-08 00:46:03 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:46:03.011156 | orchestrator | 2025-09-08 00:46:03 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:06.055910 | orchestrator | 2025-09-08 00:46:06 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:46:06.056503 | orchestrator | 2025-09-08 00:46:06 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:46:06.057219 | orchestrator | 2025-09-08 00:46:06 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:46:06.057286 | orchestrator | 2025-09-08 00:46:06 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:09.109130 | orchestrator | 2025-09-08 00:46:09 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:46:09.110210 | orchestrator | 2025-09-08 00:46:09 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:46:09.111847 | orchestrator | 2025-09-08 00:46:09 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:46:09.111871 | orchestrator | 2025-09-08 00:46:09 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:12.141567 | orchestrator | 2025-09-08 00:46:12 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:46:12.141719 | orchestrator | 2025-09-08 00:46:12 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:46:12.141734 | orchestrator | 2025-09-08 00:46:12 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:46:12.141868 | orchestrator | 2025-09-08 00:46:12 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:15.180249 | orchestrator | 2025-09-08 00:46:15 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:46:15.181943 | orchestrator | 2025-09-08 00:46:15 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:46:15.184604 | orchestrator | 2025-09-08 00:46:15 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:46:15.184629 | orchestrator | 2025-09-08 00:46:15 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:18.218379 | orchestrator | 2025-09-08 00:46:18 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:46:18.220940 | orchestrator | 2025-09-08 00:46:18 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:46:18.222873 | orchestrator | 2025-09-08 00:46:18 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:46:18.222900 | orchestrator | 2025-09-08 00:46:18 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:21.267533 | orchestrator | 2025-09-08 00:46:21 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:46:21.269123 | orchestrator | 2025-09-08 00:46:21 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:46:21.271143 | orchestrator | 2025-09-08 00:46:21 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state STARTED
2025-09-08 00:46:21.271962 | orchestrator | 2025-09-08 00:46:21 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:24.315067 | orchestrator | 2025-09-08 00:46:24 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:46:24.315207 | orchestrator | 2025-09-08 00:46:24 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:46:24.319115 | orchestrator | 2025-09-08 00:46:24 | INFO  | Task bd2a0c15-fee9-484b-afd5-096c4bb2382a is in state SUCCESS
2025-09-08 00:46:24.320711 | orchestrator |
2025-09-08 00:46:24.320809 | orchestrator |
2025-09-08 00:46:24.321835 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-09-08 00:46:24.321862 | orchestrator |
2025-09-08 00:46:24.321874 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-09-08 00:46:24.321886 | orchestrator | Monday 08 September 2025 00:44:15 +0000 (0:00:00.198) 0:00:00.198 ******
2025-09-08 00:46:24.321899 | orchestrator | ok: [testbed-manager]
2025-09-08 00:46:24.321911 | orchestrator |
2025-09-08 00:46:24.321923 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-09-08 00:46:24.321957 | orchestrator | Monday 08 September 2025 00:44:17 +0000 (0:00:01.668) 0:00:01.866 ******
2025-09-08 00:46:24.321969 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-09-08 00:46:24.321980 | orchestrator |
2025-09-08 00:46:24.321991 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-09-08 00:46:24.322002 | orchestrator | Monday 08 September 2025 00:44:17 +0000 (0:00:00.711) 0:00:02.578 ******
2025-09-08 00:46:24.322078 | orchestrator | changed: [testbed-manager]
2025-09-08 00:46:24.322092 | orchestrator |
2025-09-08 00:46:24.322103 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-09-08 00:46:24.322222 | orchestrator | Monday 08 September 2025 00:44:18 +0000 (0:00:01.070) 0:00:03.648 ******
2025-09-08 00:46:24.322418 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-09-08 00:46:24.322432 | orchestrator | ok: [testbed-manager]
2025-09-08 00:46:24.322443 | orchestrator |
2025-09-08 00:46:24.322453 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-09-08 00:46:24.322464 | orchestrator | Monday 08 September 2025 00:45:16 +0000 (0:00:57.638) 0:01:01.286 ******
2025-09-08 00:46:24.322475 | orchestrator | changed: [testbed-manager]
2025-09-08 00:46:24.322486 | orchestrator |
2025-09-08 00:46:24.322496 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:46:24.322507 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:46:24.322519 | orchestrator |
2025-09-08 00:46:24.322530 | orchestrator |
2025-09-08 00:46:24.322541 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:46:24.322552 | orchestrator | Monday 08 September 2025 00:45:22 +0000 (0:00:06.474) 0:01:07.761 ******
2025-09-08 00:46:24.322562 | orchestrator | ===============================================================================
2025-09-08 00:46:24.322573 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 57.64s
2025-09-08 00:46:24.322621 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 6.47s
2025-09-08 00:46:24.322633 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.67s
2025-09-08 00:46:24.322652 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.07s
2025-09-08 00:46:24.322663 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.71s
2025-09-08 00:46:24.322674 | orchestrator |
2025-09-08 00:46:24.322685 | orchestrator |
2025-09-08 00:46:24.322696 | orchestrator | PLAY [Apply role common] *******************************************************
2025-09-08 00:46:24.322706 | orchestrator |
2025-09-08 00:46:24.322717 | orchestrator | TASK [common : include_tasks] **************************************************
2025-09-08 00:46:24.322728 | orchestrator | Monday 08 September 2025 00:43:50 +0000 (0:00:00.281) 0:00:00.281 ******
2025-09-08 00:46:24.322739 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:46:24.322753 | orchestrator |
2025-09-08 00:46:24.322763 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-09-08 00:46:24.322774 | orchestrator | Monday 08 September 2025 00:43:52 +0000 (0:00:01.397) 0:00:01.679 ******
2025-09-08 00:46:24.322785 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-08 00:46:24.322795 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-08 00:46:24.322806 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-08 00:46:24.322817 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-08 00:46:24.322828 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-08 00:46:24.322839 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-08 00:46:24.322861 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-08 00:46:24.322871 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-08 00:46:24.322882 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-08 00:46:24.322895 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-08 00:46:24.322905 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-08 00:46:24.322916 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-08 00:46:24.322927 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-08 00:46:24.322938 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-08 00:46:24.322949 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-08 00:46:24.322959 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-08 00:46:24.323013 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-08 00:46:24.323026 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-08 00:46:24.323038 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-08 00:46:24.323051 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-08 00:46:24.323064 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-08 00:46:24.323077 | orchestrator |
2025-09-08 00:46:24.323091 | orchestrator | TASK [common : include_tasks] **************************************************
2025-09-08 00:46:24.323103 | orchestrator | Monday 08 September 2025 00:43:56 +0000 (0:00:04.159) 0:00:05.838 ******
2025-09-08 00:46:24.323115 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:46:24.323129 | orchestrator |
2025-09-08 00:46:24.323142 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-09-08 00:46:24.323154 | orchestrator | Monday 08 September 2025 00:43:57 +0000 (0:00:01.425) 0:00:07.264 ******
2025-09-08 00:46:24.323172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:24.323196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:24.323210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:24.323230 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:24.323245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.323307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.323323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.323337 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:24.323351 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:24.323368 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.323389 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:24.323401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.323414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.323464 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.323478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.323490 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.323502 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.323518 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.323537 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.323549 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.323561 | orchestrator | changed:
[testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.323572 | orchestrator |
2025-09-08 00:46:24.323638 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-09-08 00:46:24.323650 | orchestrator | Monday 08 September 2025 00:44:03 +0000 (0:00:05.885) 0:00:13.150 ******
2025-09-08 00:46:24.323697 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:24.323712 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.323723 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value':
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.323735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-08 00:46:24.323759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.323771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2025-09-08 00:46:24.323782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-08 00:46:24.323794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.323811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.323823 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:46:24.323834 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:46:24.323846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-08 00:46:24.323857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.323874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.323891 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-08 00:46:24.323902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.323914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.323925 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:46:24.323936 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:46:24.323946 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:46:24.323957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-08 00:46:24.323977 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.323989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.324000 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:46:24.324021 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-08 00:46:24.324032 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.324044 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.324055 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:46:24.324066 | orchestrator |
2025-09-08 00:46:24.324077 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-09-08 00:46:24.324088 | orchestrator | Monday 08 September 2025 00:44:05 +0000 (0:00:01.544) 0:00:14.694 ******
2025-09-08 00:46:24.324099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:24.324121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1',
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.324141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.324151 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:46:24.324161 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-08 00:46:24.324178 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.324188 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.324198 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:46:24.324212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-08 00:46:24.324222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.324233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.324242 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:46:24.324252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-08 00:46:24.324268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.324285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2025-09-08 00:46:24.324295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-08 00:46:24.324305 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:46:24.324319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.324330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.324340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-08 00:46:24.324350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.324372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.324382 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:46:24.324392 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:46:24.324402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:24.324418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.324428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.324438 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:46:24.324448 | orchestrator |
2025-09-08 00:46:24.324458 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-09-08 00:46:24.324468 | orchestrator | Monday 08 September 2025 00:44:07 +0000 (0:00:02.712) 0:00:17.407 ******
2025-09-08 00:46:24.324477 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:46:24.324487 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:46:24.324500 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:46:24.324510 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:46:24.324520 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:46:24.324529 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:46:24.324538 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:46:24.324548 | orchestrator |
2025-09-08 00:46:24.324557 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-09-08 00:46:24.324567 | orchestrator | Monday 08 September 2025 00:44:09 +0000 (0:00:02.093) 0:00:19.501 ******
2025-09-08 00:46:24.324576 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:46:24.324603 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:46:24.324612 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:46:24.324622 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:46:24.324631 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:46:24.324640 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:46:24.324650 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:46:24.324659 | orchestrator |
2025-09-08 00:46:24.324669 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-09-08 00:46:24.324678 | orchestrator | Monday 08 September 2025 00:44:11 +0000 (0:00:01.934) 0:00:21.435 ******
2025-09-08 00:46:24.324688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:24.324698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'},
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:24.324721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:24.324732 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:24.324742 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:24.324752 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:24.324766 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:24.324776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.324787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.324802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.324818 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.324828 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.324838 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.324852 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.324863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.324873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.324889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.324904 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.324914 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.324924 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.324934 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.324944 | orchestrator | 2025-09-08 00:46:24.324954 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-08 00:46:24.324964 | orchestrator | Monday 08 September 2025 00:44:17 +0000 (0:00:05.576) 0:00:27.012 ****** 2025-09-08 00:46:24.324974 | orchestrator | [WARNING]: Skipped 2025-09-08 00:46:24.324984 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-08 00:46:24.324994 | orchestrator | to this access issue: 2025-09-08 00:46:24.325003 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-08 00:46:24.325013 | orchestrator | directory 2025-09-08 00:46:24.325023 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-08 00:46:24.325033 | orchestrator | 2025-09-08 00:46:24.325042 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-09-08 00:46:24.325052 | orchestrator | Monday 08 September 2025 00:44:18 +0000 (0:00:01.460) 0:00:28.473 ****** 2025-09-08 00:46:24.325062 | orchestrator | [WARNING]: Skipped 2025-09-08 00:46:24.325076 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-09-08 00:46:24.325086 | orchestrator | to this access issue: 2025-09-08 00:46:24.325095 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-08 00:46:24.325105 | orchestrator | directory 2025-09-08 00:46:24.325115 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-08 00:46:24.325124 | orchestrator | 2025-09-08 00:46:24.325134 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-08 00:46:24.325149 | orchestrator | Monday 08 September 2025 00:44:20 +0000 (0:00:01.517) 0:00:29.991 ****** 2025-09-08 00:46:24.325159 | orchestrator | [WARNING]: Skipped 2025-09-08 00:46:24.325169 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-08 00:46:24.325178 | orchestrator | to this access issue: 2025-09-08 00:46:24.325188 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-08 00:46:24.325198 | orchestrator | directory 2025-09-08 00:46:24.325208 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-08 00:46:24.325217 | orchestrator | 2025-09-08 00:46:24.325227 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-08 00:46:24.325236 | orchestrator | Monday 08 September 2025 00:44:21 +0000 (0:00:00.814) 0:00:30.805 ****** 2025-09-08 00:46:24.325246 | orchestrator | [WARNING]: Skipped 2025-09-08 00:46:24.325256 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-08 00:46:24.325265 | orchestrator | to this access issue: 2025-09-08 00:46:24.325275 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-08 00:46:24.325285 | orchestrator | directory 2025-09-08 00:46:24.325294 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-08 
00:46:24.325304 | orchestrator | 2025-09-08 00:46:24.325313 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-08 00:46:24.325323 | orchestrator | Monday 08 September 2025 00:44:21 +0000 (0:00:00.744) 0:00:31.550 ****** 2025-09-08 00:46:24.325332 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:46:24.325342 | orchestrator | changed: [testbed-manager] 2025-09-08 00:46:24.325352 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:46:24.325361 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:46:24.325371 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:46:24.325380 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:46:24.325390 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:46:24.325399 | orchestrator | 2025-09-08 00:46:24.325409 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-08 00:46:24.325419 | orchestrator | Monday 08 September 2025 00:44:25 +0000 (0:00:03.485) 0:00:35.036 ****** 2025-09-08 00:46:24.325429 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-08 00:46:24.325438 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-08 00:46:24.325448 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-08 00:46:24.325462 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-08 00:46:24.325472 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-08 00:46:24.325482 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-08 00:46:24.325491 | orchestrator | changed: [testbed-node-5] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-08 00:46:24.325501 | orchestrator | 2025-09-08 00:46:24.325511 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-08 00:46:24.325520 | orchestrator | Monday 08 September 2025 00:44:28 +0000 (0:00:02.872) 0:00:37.908 ****** 2025-09-08 00:46:24.325530 | orchestrator | changed: [testbed-manager] 2025-09-08 00:46:24.325540 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:46:24.325549 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:46:24.325559 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:46:24.325568 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:46:24.325592 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:46:24.325601 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:46:24.325611 | orchestrator | 2025-09-08 00:46:24.325621 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-08 00:46:24.325636 | orchestrator | Monday 08 September 2025 00:44:30 +0000 (0:00:02.294) 0:00:40.203 ****** 2025-09-08 00:46:24.325646 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:24.325664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.325675 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:24.325685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.325695 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:24.325711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.325721 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.325741 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.325751 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:24.325761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.325771 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.325781 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:24.325795 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.325810 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:24.325821 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.325836 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:24.325846 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:24.325860 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.325870 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.325880 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.325891 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:24.325900 | orchestrator | 2025-09-08 00:46:24.325910 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-09-08 00:46:24.325920 | orchestrator | Monday 08 September 2025 00:44:32 +0000 (0:00:02.348) 0:00:42.551 ****** 2025-09-08 00:46:24.325930 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-08 00:46:24.325939 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-08 00:46:24.325949 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-08 00:46:24.325967 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-08 00:46:24.325982 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-08 00:46:24.325992 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-08 00:46:24.326002 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-08 00:46:24.326011 | orchestrator | 
2025-09-08 00:46:24.326055 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-08 00:46:24.326065 | orchestrator | Monday 08 September 2025 00:44:35 +0000 (0:00:02.434) 0:00:44.986 ****** 2025-09-08 00:46:24.326075 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-08 00:46:24.326085 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-08 00:46:24.326094 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-08 00:46:24.326104 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-08 00:46:24.326114 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-08 00:46:24.326123 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-08 00:46:24.326133 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-08 00:46:24.326143 | orchestrator | 2025-09-08 00:46:24.326152 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-08 00:46:24.326162 | orchestrator | Monday 08 September 2025 00:44:38 +0000 (0:00:02.802) 0:00:47.788 ****** 2025-09-08 00:46:24.326172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:24.326189 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:24.326199 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:24.326210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.326220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:24.326242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.326253 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:24.326263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.326278 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:24.326289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.326299 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:24.326309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.326329 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.326340 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.326350 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.326360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.326374 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.326384 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.326394 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.326409 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.326420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:24.326429 | orchestrator |
2025-09-08 00:46:24.326443 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-09-08 00:46:24.326453 | orchestrator | Monday 08 September 2025 00:44:42 +0000 (0:00:04.226) 0:00:52.015 ******
2025-09-08 00:46:24.326463 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:46:24.326473 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:46:24.326482 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:46:24.326492 | orchestrator | changed: [testbed-manager]
2025-09-08 00:46:24.326501 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:46:24.326511 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:46:24.326521 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:46:24.326530 | orchestrator |
2025-09-08 00:46:24.326540 | orchestrator | TASK [common : Link kolla_logs volume to
/var/log/kolla] ***********************
2025-09-08 00:46:24.326549 | orchestrator | Monday 08 September 2025 00:44:44 +0000 (0:00:02.408) 0:00:54.424 ******
2025-09-08 00:46:24.326559 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:46:24.326568 | orchestrator | changed: [testbed-manager]
2025-09-08 00:46:24.326623 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:46:24.326635 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:46:24.326645 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:46:24.326655 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:46:24.326664 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:46:24.326674 | orchestrator |
2025-09-08 00:46:24.326684 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-08 00:46:24.326694 | orchestrator | Monday 08 September 2025 00:44:46 +0000 (0:00:00.071) 0:00:56.344 ******
2025-09-08 00:46:24.326703 | orchestrator |
2025-09-08 00:46:24.326713 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-08 00:46:24.326723 | orchestrator | Monday 08 September 2025 00:44:46 +0000 (0:00:00.059) 0:00:56.415 ******
2025-09-08 00:46:24.326732 | orchestrator |
2025-09-08 00:46:24.326742 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-08 00:46:24.326752 | orchestrator | Monday 08 September 2025 00:44:46 +0000 (0:00:00.070) 0:00:56.475 ******
2025-09-08 00:46:24.326761 | orchestrator |
2025-09-08 00:46:24.326771 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-08 00:46:24.326781 | orchestrator | Monday 08 September 2025 00:44:46 +0000 (0:00:00.278) 0:00:56.546 ******
2025-09-08 00:46:24.326790 | orchestrator |
2025-09-08 00:46:24.326800 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-08 00:46:24.326810 | orchestrator | Monday 08 September 2025 00:44:47 +0000 (0:00:00.278) 0:00:56.825 ******
2025-09-08 00:46:24.326820 | orchestrator |
2025-09-08 00:46:24.326829 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-08 00:46:24.326839 | orchestrator | Monday 08 September 2025 00:44:47 +0000 (0:00:00.084) 0:00:56.909 ******
2025-09-08 00:46:24.326849 | orchestrator |
2025-09-08 00:46:24.326859 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-08 00:46:24.326868 | orchestrator | Monday 08 September 2025 00:44:47 +0000 (0:00:00.069) 0:00:56.979 ******
2025-09-08 00:46:24.326883 | orchestrator |
2025-09-08 00:46:24.326895 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-09-08 00:46:24.326903 | orchestrator | Monday 08 September 2025 00:44:47 +0000 (0:00:00.100) 0:00:57.079 ******
2025-09-08 00:46:24.326911 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:46:24.326919 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:46:24.326927 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:46:24.326935 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:46:24.326943 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:46:24.326951 | orchestrator | changed: [testbed-manager]
2025-09-08 00:46:24.326959 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:46:24.326967 | orchestrator |
2025-09-08 00:46:24.326975 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-09-08 00:46:24.326983 | orchestrator | Monday 08 September 2025 00:45:28 +0000 (0:00:41.369) 0:01:38.448 ******
2025-09-08 00:46:24.326991 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:46:24.326999 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:46:24.327006 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:46:24.327014 | orchestrator | changed: [testbed-manager]
2025-09-08 00:46:24.327022 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:46:24.327030 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:46:24.327038 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:46:24.327046 | orchestrator |
2025-09-08 00:46:24.327054 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-09-08 00:46:24.327062 | orchestrator | Monday 08 September 2025 00:46:12 +0000 (0:00:43.227) 0:02:21.675 ******
2025-09-08 00:46:24.327070 | orchestrator | ok: [testbed-manager]
2025-09-08 00:46:24.327078 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:46:24.327086 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:46:24.327094 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:46:24.327102 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:46:24.327110 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:46:24.327118 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:46:24.327126 | orchestrator |
2025-09-08 00:46:24.327134 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-09-08 00:46:24.327142 | orchestrator | Monday 08 September 2025 00:46:14 +0000 (0:00:01.984) 0:02:23.660 ******
2025-09-08 00:46:24.327150 | orchestrator | changed: [testbed-manager]
2025-09-08 00:46:24.327158 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:46:24.327165 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:46:24.327173 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:46:24.327181 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:46:24.327189 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:46:24.327197 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:46:24.327205 | orchestrator |
2025-09-08 00:46:24.327213 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:46:24.327221 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-08 00:46:24.327230 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-08 00:46:24.327243 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-08 00:46:24.327251 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-08 00:46:24.327259 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-08 00:46:24.327267 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-08 00:46:24.327280 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-08 00:46:24.327288 | orchestrator |
2025-09-08 00:46:24.327296 | orchestrator |
2025-09-08 00:46:24.327305 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:46:24.327313 | orchestrator | Monday 08 September 2025 00:46:23 +0000 (0:00:09.398) 0:02:33.059 ******
2025-09-08 00:46:24.327321 | orchestrator | ===============================================================================
2025-09-08 00:46:24.327329 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 43.23s
2025-09-08 00:46:24.327337 | orchestrator | common : Restart fluentd container ------------------------------------- 41.37s
2025-09-08 00:46:24.327345 | orchestrator | common : Restart cron container ----------------------------------------- 9.40s
2025-09-08 00:46:24.327353 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.89s
2025-09-08 00:46:24.327361 | orchestrator | common : Copying over config.json files for services -------------------- 5.58s
2025-09-08 00:46:24.327368 | orchestrator | common : Check common containers ---------------------------------------- 4.23s
2025-09-08 00:46:24.327376 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.16s
2025-09-08 00:46:24.327384 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.49s
2025-09-08 00:46:24.327392 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.87s
2025-09-08 00:46:24.327400 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.80s
2025-09-08 00:46:24.327408 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.71s
2025-09-08 00:46:24.327416 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.43s
2025-09-08 00:46:24.327430 | orchestrator | common : Creating log volume -------------------------------------------- 2.41s
2025-09-08 00:46:24.327438 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.35s
2025-09-08 00:46:24.327446 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.29s
2025-09-08 00:46:24.327454 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 2.09s
2025-09-08 00:46:24.327462 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.98s
2025-09-08 00:46:24.327470 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.93s
2025-09-08 00:46:24.327478 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.92s
2025-09-08 00:46:24.327486 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.54s
2025-09-08 00:46:24.327494 | 2025-09-08 00:46:24 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:27.385482 | orchestrator | 2025-09-08 00:46:27 | INFO  | Task
dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:46:27.388564 | orchestrator | 2025-09-08 00:46:27 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:46:27.393143 | orchestrator | 2025-09-08 00:46:27 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:46:27.393170 | orchestrator | 2025-09-08 00:46:27 | INFO  | Task bc1a7e01-e51d-494f-b4f1-b0d5355d5a66 is in state STARTED
2025-09-08 00:46:27.396729 | orchestrator | 2025-09-08 00:46:27 | INFO  | Task 7a668839-d689-4e67-aa50-57494c60ba45 is in state STARTED
2025-09-08 00:46:27.398378 | orchestrator | 2025-09-08 00:46:27 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED
2025-09-08 00:46:27.398896 | orchestrator | 2025-09-08 00:46:27 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:30.432032 | orchestrator | 2025-09-08 00:46:30 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:46:30.434131 | orchestrator | 2025-09-08 00:46:30 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:46:30.435864 | orchestrator | 2025-09-08 00:46:30 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:46:30.437050 | orchestrator | 2025-09-08 00:46:30 | INFO  | Task bc1a7e01-e51d-494f-b4f1-b0d5355d5a66 is in state STARTED
2025-09-08 00:46:30.437072 | orchestrator | 2025-09-08 00:46:30 | INFO  | Task 7a668839-d689-4e67-aa50-57494c60ba45 is in state STARTED
2025-09-08 00:46:30.437481 | orchestrator | 2025-09-08 00:46:30 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED
2025-09-08 00:46:30.437502 | orchestrator | 2025-09-08 00:46:30 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:33.555471 | orchestrator | 2025-09-08 00:46:33 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:46:33.557671 | orchestrator | 2025-09-08 00:46:33 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:46:33.557702 | orchestrator | 2025-09-08 00:46:33 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:46:33.557715 | orchestrator | 2025-09-08 00:46:33 | INFO  | Task bc1a7e01-e51d-494f-b4f1-b0d5355d5a66 is in state STARTED
2025-09-08 00:46:33.559190 | orchestrator | 2025-09-08 00:46:33 | INFO  | Task 7a668839-d689-4e67-aa50-57494c60ba45 is in state STARTED
2025-09-08 00:46:33.559815 | orchestrator | 2025-09-08 00:46:33 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED
2025-09-08 00:46:33.559835 | orchestrator | 2025-09-08 00:46:33 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:36.600858 | orchestrator | 2025-09-08 00:46:36 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:46:36.601141 | orchestrator | 2025-09-08 00:46:36 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:46:36.601853 | orchestrator | 2025-09-08 00:46:36 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:46:36.602679 | orchestrator | 2025-09-08 00:46:36 | INFO  | Task bc1a7e01-e51d-494f-b4f1-b0d5355d5a66 is in state STARTED
2025-09-08 00:46:36.603303 | orchestrator | 2025-09-08 00:46:36 | INFO  | Task 7a668839-d689-4e67-aa50-57494c60ba45 is in state STARTED
2025-09-08 00:46:36.604125 | orchestrator | 2025-09-08 00:46:36 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED
2025-09-08 00:46:36.604924 | orchestrator | 2025-09-08 00:46:36 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:39.637013 | orchestrator | 2025-09-08 00:46:39 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:46:39.637344 | orchestrator | 2025-09-08 00:46:39 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:46:39.638156 | orchestrator | 2025-09-08 00:46:39 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:46:39.639630 | orchestrator | 2025-09-08 00:46:39 | INFO  | Task bc1a7e01-e51d-494f-b4f1-b0d5355d5a66 is in state STARTED
2025-09-08 00:46:39.640342 | orchestrator | 2025-09-08 00:46:39 | INFO  | Task 7a668839-d689-4e67-aa50-57494c60ba45 is in state STARTED
2025-09-08 00:46:39.641774 | orchestrator | 2025-09-08 00:46:39 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED
2025-09-08 00:46:39.642013 | orchestrator | 2025-09-08 00:46:39 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:42.707556 | orchestrator | 2025-09-08 00:46:42 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:46:42.709872 | orchestrator | 2025-09-08 00:46:42 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:46:42.712104 | orchestrator | 2025-09-08 00:46:42 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:46:42.775420 | orchestrator | 2025-09-08 00:46:42 | INFO  | Task bc1a7e01-e51d-494f-b4f1-b0d5355d5a66 is in state STARTED
2025-09-08 00:46:42.775491 | orchestrator | 2025-09-08 00:46:42 | INFO  | Task 7a668839-d689-4e67-aa50-57494c60ba45 is in state STARTED
2025-09-08 00:46:42.775504 | orchestrator | 2025-09-08 00:46:42 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED
2025-09-08 00:46:42.775517 | orchestrator | 2025-09-08 00:46:42 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:45.775101 | orchestrator | 2025-09-08 00:46:45 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:46:45.775209 | orchestrator | 2025-09-08 00:46:45 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:46:45.775223 | orchestrator | 2025-09-08 00:46:45 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:46:45.775235 | orchestrator | 2025-09-08 00:46:45 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED
2025-09-08 00:46:45.775246 | orchestrator | 2025-09-08 00:46:45 | INFO  | Task bc1a7e01-e51d-494f-b4f1-b0d5355d5a66 is in state SUCCESS
2025-09-08 00:46:45.775257 | orchestrator | 2025-09-08 00:46:45 | INFO  | Task 7a668839-d689-4e67-aa50-57494c60ba45 is in state STARTED
2025-09-08 00:46:45.775268 | orchestrator | 2025-09-08 00:46:45 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED
2025-09-08 00:46:45.775279 | orchestrator | 2025-09-08 00:46:45 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:48.802393 | orchestrator | 2025-09-08 00:46:48 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:46:48.802942 | orchestrator | 2025-09-08 00:46:48 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:46:48.803680 | orchestrator | 2025-09-08 00:46:48 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:46:48.804181 | orchestrator | 2025-09-08 00:46:48 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED
2025-09-08 00:46:48.805089 | orchestrator | 2025-09-08 00:46:48 | INFO  | Task 7a668839-d689-4e67-aa50-57494c60ba45 is in state STARTED
2025-09-08 00:46:48.805804 | orchestrator | 2025-09-08 00:46:48 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED
2025-09-08 00:46:48.805904 | orchestrator | 2025-09-08 00:46:48 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:51.834815 | orchestrator | 2025-09-08 00:46:51 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:46:51.834921 | orchestrator | 2025-09-08 00:46:51 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:46:51.834937 | orchestrator | 2025-09-08 00:46:51 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:46:51.834948 | orchestrator | 2025-09-08 00:46:51 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED
2025-09-08 00:46:51.835114 | orchestrator | 2025-09-08 00:46:51 | INFO  | Task 7a668839-d689-4e67-aa50-57494c60ba45 is in state STARTED
2025-09-08 00:46:51.836262 | orchestrator | 2025-09-08 00:46:51 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED
2025-09-08 00:46:51.836283 | orchestrator | 2025-09-08 00:46:51 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:54.873064 | orchestrator | 2025-09-08 00:46:54 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:46:54.873310 | orchestrator | 2025-09-08 00:46:54 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:46:54.873920 | orchestrator | 2025-09-08 00:46:54 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:46:54.874496 | orchestrator | 2025-09-08 00:46:54 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED
2025-09-08 00:46:54.875119 | orchestrator | 2025-09-08 00:46:54 | INFO  | Task 7a668839-d689-4e67-aa50-57494c60ba45 is in state STARTED
2025-09-08 00:46:54.875969 | orchestrator | 2025-09-08 00:46:54 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED
2025-09-08 00:46:54.876001 | orchestrator | 2025-09-08 00:46:54 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:57.908450 | orchestrator | 2025-09-08 00:46:57 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:46:57.909125 | orchestrator | 2025-09-08 00:46:57 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:46:57.910326 | orchestrator | 2025-09-08 00:46:57 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:46:57.911778 | orchestrator | 2025-09-08 00:46:57 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED
2025-09-08 00:46:57.913771 | orchestrator | 2025-09-08 00:46:57 | INFO  | Task 7a668839-d689-4e67-aa50-57494c60ba45 is in state SUCCESS
2025-09-08 00:46:57.914324 | orchestrator |
2025-09-08 00:46:57.914351 | orchestrator |
2025-09-08 00:46:57.914362 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 00:46:57.914374 | orchestrator |
2025-09-08 00:46:57.914385 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 00:46:57.914396 | orchestrator | Monday 08 September 2025 00:46:29 +0000 (0:00:00.329) 0:00:00.329 ******
2025-09-08 00:46:57.914407 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:46:57.914419 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:46:57.914429 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:46:57.914440 | orchestrator |
2025-09-08 00:46:57.914451 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 00:46:57.914462 | orchestrator | Monday 08 September 2025 00:46:30 +0000 (0:00:00.429) 0:00:00.759 ******
2025-09-08 00:46:57.914474 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-09-08 00:46:57.914485 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-09-08 00:46:57.914496 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-09-08 00:46:57.914506 | orchestrator |
2025-09-08 00:46:57.914517 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-09-08 00:46:57.914528 | orchestrator |
2025-09-08 00:46:57.914539 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-09-08 00:46:57.914550 | orchestrator | Monday 08 September 2025 00:46:31 +0000 (0:00:00.634) 0:00:01.393 ******
2025-09-08 00:46:57.914560 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:46:57.914571 | orchestrator | 2025-09-08
00:46:57.914582 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-09-08 00:46:57.914622 | orchestrator | Monday 08 September 2025 00:46:32 +0000 (0:00:01.472) 0:00:02.866 ****** 2025-09-08 00:46:57.914634 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-08 00:46:57.914645 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-08 00:46:57.914655 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-08 00:46:57.914666 | orchestrator | 2025-09-08 00:46:57.914677 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-09-08 00:46:57.914688 | orchestrator | Monday 08 September 2025 00:46:33 +0000 (0:00:01.213) 0:00:04.079 ****** 2025-09-08 00:46:57.914723 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-08 00:46:57.914734 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-08 00:46:57.914744 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-08 00:46:57.914755 | orchestrator | 2025-09-08 00:46:57.914766 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-09-08 00:46:57.914777 | orchestrator | Monday 08 September 2025 00:46:36 +0000 (0:00:02.628) 0:00:06.708 ****** 2025-09-08 00:46:57.914787 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:46:57.914798 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:46:57.914809 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:46:57.914819 | orchestrator | 2025-09-08 00:46:57.914830 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-09-08 00:46:57.914840 | orchestrator | Monday 08 September 2025 00:46:38 +0000 (0:00:01.867) 0:00:08.575 ****** 2025-09-08 00:46:57.914851 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:46:57.914862 | orchestrator | changed: [testbed-node-1] 2025-09-08 
00:46:57.914872 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:46:57.914883 | orchestrator | 2025-09-08 00:46:57.914895 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:46:57.914908 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:46:57.914923 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:46:57.914936 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:46:57.914948 | orchestrator | 2025-09-08 00:46:57.914994 | orchestrator | 2025-09-08 00:46:57.915007 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:46:57.915065 | orchestrator | Monday 08 September 2025 00:46:41 +0000 (0:00:02.850) 0:00:11.426 ****** 2025-09-08 00:46:57.915078 | orchestrator | =============================================================================== 2025-09-08 00:46:57.915090 | orchestrator | memcached : Restart memcached container --------------------------------- 2.85s 2025-09-08 00:46:57.915173 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.63s 2025-09-08 00:46:57.915186 | orchestrator | memcached : Check memcached container ----------------------------------- 1.87s 2025-09-08 00:46:57.915199 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.47s 2025-09-08 00:46:57.915211 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.21s 2025-09-08 00:46:57.915223 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2025-09-08 00:46:57.915236 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.43s 2025-09-08 00:46:57.916081 | orchestrator | 2025-09-08 
00:46:57.916107 | orchestrator |
2025-09-08 00:46:57.916118 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 00:46:57.916129 | orchestrator |
2025-09-08 00:46:57.916141 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 00:46:57.916152 | orchestrator | Monday 08 September 2025 00:46:29 +0000 (0:00:00.346) 0:00:00.346 ******
2025-09-08 00:46:57.916163 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:46:57.916174 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:46:57.916185 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:46:57.916196 | orchestrator |
2025-09-08 00:46:57.916207 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 00:46:57.916227 | orchestrator | Monday 08 September 2025 00:46:29 +0000 (0:00:00.467) 0:00:00.813 ******
2025-09-08 00:46:57.916239 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-09-08 00:46:57.916250 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-09-08 00:46:57.916261 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-09-08 00:46:57.916283 | orchestrator |
2025-09-08 00:46:57.916294 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-09-08 00:46:57.916305 | orchestrator |
2025-09-08 00:46:57.916316 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-09-08 00:46:57.916327 | orchestrator | Monday 08 September 2025 00:46:30 +0000 (0:00:00.631) 0:00:01.445 ******
2025-09-08 00:46:57.916338 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:46:57.916349 | orchestrator |
2025-09-08 00:46:57.916360 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-09-08 00:46:57.916371 |
orchestrator | Monday 08 September 2025 00:46:31 +0000 (0:00:00.754) 0:00:02.199 ****** 2025-09-08 00:46:57.916385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 
'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 
'timeout': '30'}}}) 2025-09-08 00:46:57.916482 | orchestrator | 2025-09-08 00:46:57.916493 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-09-08 00:46:57.916504 | orchestrator | Monday 08 September 2025 00:46:32 +0000 (0:00:01.678) 0:00:03.878 ****** 2025-09-08 00:46:57.916516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916627 | orchestrator | 2025-09-08 00:46:57.916638 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-09-08 00:46:57.916649 | orchestrator | Monday 08 September 2025 00:46:36 +0000 (0:00:03.709) 0:00:07.588 ****** 2025-09-08 00:46:57.916660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916758 | orchestrator | 2025-09-08 00:46:57.916770 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-09-08 00:46:57.916782 | orchestrator | Monday 08 September 2025 00:46:39 +0000 (0:00:03.054) 0:00:10.642 ****** 2025-09-08 00:46:57.916796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916822 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-08 
00:46:57.916879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-08 00:46:57.916892 | orchestrator | 2025-09-08 00:46:57.916904 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-08 00:46:57.916917 | orchestrator | Monday 08 September 2025 00:46:42 +0000 (0:00:02.520) 0:00:13.163 ****** 2025-09-08 00:46:57.916930 | orchestrator | 2025-09-08 00:46:57.916942 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-08 00:46:57.916955 | orchestrator | Monday 08 September 2025 00:46:42 +0000 (0:00:00.302) 0:00:13.465 ****** 2025-09-08 00:46:57.916967 | orchestrator | 2025-09-08 00:46:57.916980 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-08 00:46:57.916992 | orchestrator | Monday 08 September 2025 00:46:42 +0000 (0:00:00.203) 0:00:13.669 ****** 2025-09-08 00:46:57.917005 | orchestrator | 2025-09-08 00:46:57.917018 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-09-08 00:46:57.917029 | orchestrator | Monday 08 September 2025 00:46:42 +0000 (0:00:00.117) 0:00:13.786 ****** 2025-09-08 00:46:57.917040 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:46:57.917051 | orchestrator | changed: [testbed-node-1] 
2025-09-08 00:46:57.917061 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:46:57.917072 | orchestrator |
2025-09-08 00:46:57.917083 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-09-08 00:46:57.917094 | orchestrator | Monday 08 September 2025 00:46:46 +0000 (0:00:04.134) 0:00:17.921 ******
2025-09-08 00:46:57.917104 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:46:57.917115 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:46:57.917126 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:46:57.917136 | orchestrator |
2025-09-08 00:46:57.917147 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:46:57.917158 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:46:57.917170 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:46:57.917181 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:46:57.917192 | orchestrator |
2025-09-08 00:46:57.917203 | orchestrator |
2025-09-08 00:46:57.917214 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:46:57.917224 | orchestrator | Monday 08 September 2025 00:46:56 +0000 (0:00:09.286) 0:00:27.207 ******
2025-09-08 00:46:57.917235 | orchestrator | ===============================================================================
2025-09-08 00:46:57.917246 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.29s
2025-09-08 00:46:57.917256 | orchestrator | redis : Restart redis container ----------------------------------------- 4.13s
2025-09-08 00:46:57.917267 | orchestrator | redis : Copying over default config.json files -------------------------- 3.71s
2025-09-08 00:46:57.917278 | orchestrator | redis : Copying over redis config files --------------------------------- 3.05s
2025-09-08 00:46:57.917288 | orchestrator | redis : Check redis containers ------------------------------------------ 2.52s
2025-09-08 00:46:57.917299 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.68s
2025-09-08 00:46:57.917315 | orchestrator | redis : include_tasks --------------------------------------------------- 0.75s
2025-09-08 00:46:57.917326 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s
2025-09-08 00:46:57.917336 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.62s
2025-09-08 00:46:57.917347 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.47s
2025-09-08 00:46:57.917358 | orchestrator | 2025-09-08 00:46:57 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED
2025-09-08 00:46:57.917369 | orchestrator | 2025-09-08 00:46:57 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:00.946294 | orchestrator | 2025-09-08 00:47:00 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:47:00.946575 | orchestrator | 2025-09-08 00:47:00 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:47:00.962062 | orchestrator | 2025-09-08 00:47:00 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:47:00.962103 | orchestrator | 2025-09-08 00:47:00 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED
2025-09-08 00:47:00.962114 | orchestrator | 2025-09-08 00:47:00 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED
2025-09-08 00:47:00.962126 | orchestrator | 2025-09-08 00:47:00 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:04.005962 | orchestrator | 2025-09-08 00:47:04 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:47:04.006867 | orchestrator | 2025-09-08 00:47:04 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:47:04.007163 | orchestrator | 2025-09-08 00:47:04 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:47:04.008138 | orchestrator | 2025-09-08 00:47:04 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED
2025-09-08 00:47:04.008953 | orchestrator | 2025-09-08 00:47:04 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED
2025-09-08 00:47:04.008975 | orchestrator | 2025-09-08 00:47:04 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:07.070317 | orchestrator | 2025-09-08 00:47:07 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:47:07.070840 | orchestrator | 2025-09-08 00:47:07 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:47:07.072958 | orchestrator | 2025-09-08 00:47:07 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:47:07.073457 | orchestrator | 2025-09-08 00:47:07 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED
2025-09-08 00:47:07.074216 | orchestrator | 2025-09-08 00:47:07 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED
2025-09-08 00:47:07.074788 | orchestrator | 2025-09-08 00:47:07 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:10.182385 | orchestrator | 2025-09-08 00:47:10 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:47:10.182494 | orchestrator | 2025-09-08 00:47:10 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:47:10.183343 | orchestrator | 2025-09-08 00:47:10 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:47:10.189069 | orchestrator | 2025-09-08 00:47:10 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED
2025-09-08 00:47:10.194107 | orchestrator | 2025-09-08 00:47:10 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED
2025-09-08 00:47:10.194161 | orchestrator | 2025-09-08 00:47:10 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:13.214221 | orchestrator | 2025-09-08 00:47:13 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:47:13.214824 | orchestrator | 2025-09-08 00:47:13 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:47:13.215214 | orchestrator | 2025-09-08 00:47:13 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:47:13.217640 | orchestrator | 2025-09-08 00:47:13 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED
2025-09-08 00:47:13.218164 | orchestrator | 2025-09-08 00:47:13 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED
2025-09-08 00:47:13.218189 | orchestrator | 2025-09-08 00:47:13 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:16.507937 | orchestrator | 2025-09-08 00:47:16 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:47:16.510270 | orchestrator | 2025-09-08 00:47:16 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:47:16.512377 | orchestrator | 2025-09-08 00:47:16 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:47:16.517376 | orchestrator | 2025-09-08 00:47:16 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED
2025-09-08 00:47:16.517402 | orchestrator | 2025-09-08 00:47:16 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED
2025-09-08 00:47:16.517414 | orchestrator | 2025-09-08 00:47:16 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:19.709623 | orchestrator | 2025-09-08 00:47:19 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:47:19.709734 | orchestrator | 2025-09-08 00:47:19 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:47:19.709749 | orchestrator | 2025-09-08 00:47:19 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:47:19.709765 | orchestrator | 2025-09-08 00:47:19 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED
2025-09-08 00:47:19.709777 | orchestrator | 2025-09-08 00:47:19 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED
2025-09-08 00:47:19.709788 | orchestrator | 2025-09-08 00:47:19 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:22.744960 | orchestrator | 2025-09-08 00:47:22 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:47:22.745061 | orchestrator | 2025-09-08 00:47:22 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state STARTED
2025-09-08 00:47:22.745075 | orchestrator | 2025-09-08 00:47:22 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:47:22.745086 | orchestrator | 2025-09-08 00:47:22 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED
2025-09-08 00:47:22.745097 | orchestrator | 2025-09-08 00:47:22 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED
2025-09-08 00:47:22.745108 | orchestrator | 2025-09-08 00:47:22 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:25.838799 | orchestrator | 2025-09-08 00:47:25 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:47:25.838923 | orchestrator | 2025-09-08 00:47:25 | INFO  | Task cdf7b97f-1879-41ba-84f5-6ad0e31a2aea is in state SUCCESS
2025-09-08 00:47:25.838938 | orchestrator | 2025-09-08 00:47:25 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:47:25.839012 | orchestrator | 2025-09-08 00:47:25 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED
2025-09-08 00:47:25.839025 |
orchestrator | 2025-09-08 00:47:25 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED 2025-09-08 00:47:25.839036 | orchestrator | 2025-09-08 00:47:25 | INFO  | Task 0b8dcaa0-c70f-4c00-ad39-53633c045b64 is in state STARTED 2025-09-08 00:47:25.839047 | orchestrator | 2025-09-08 00:47:25 | INFO  | Task 06ece321-4971-498a-875b-238099645e2c is in state STARTED 2025-09-08 00:47:25.839058 | orchestrator | 2025-09-08 00:47:25 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:47:25.839845 | orchestrator | 2025-09-08 00:47:25.839874 | orchestrator | 2025-09-08 00:47:25.840033 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-09-08 00:47:25.840061 | orchestrator | 2025-09-08 00:47:25.840073 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-09-08 00:47:25.840084 | orchestrator | Monday 08 September 2025 00:43:50 +0000 (0:00:00.179) 0:00:00.179 ****** 2025-09-08 00:47:25.840095 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:47:25.840108 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:47:25.840119 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:47:25.840234 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:25.840246 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:25.840256 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:25.840267 | orchestrator | 2025-09-08 00:47:25.840278 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-09-08 00:47:25.840289 | orchestrator | Monday 08 September 2025 00:43:51 +0000 (0:00:00.765) 0:00:00.945 ****** 2025-09-08 00:47:25.840300 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:47:25.840312 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:47:25.840324 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:47:25.840334 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:25.840345 | orchestrator | 
skipping: [testbed-node-1] 2025-09-08 00:47:25.840356 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:25.840366 | orchestrator | 2025-09-08 00:47:25.840377 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-09-08 00:47:25.840388 | orchestrator | Monday 08 September 2025 00:43:52 +0000 (0:00:00.750) 0:00:01.695 ****** 2025-09-08 00:47:25.840399 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:47:25.840410 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:47:25.840420 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:47:25.840431 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:25.840441 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:47:25.840452 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:25.840463 | orchestrator | 2025-09-08 00:47:25.840474 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-09-08 00:47:25.840484 | orchestrator | Monday 08 September 2025 00:43:52 +0000 (0:00:00.727) 0:00:02.422 ****** 2025-09-08 00:47:25.840495 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:47:25.840506 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:47:25.840516 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:47:25.840527 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:25.840538 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:25.840548 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:25.840559 | orchestrator | 2025-09-08 00:47:25.840570 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-09-08 00:47:25.840581 | orchestrator | Monday 08 September 2025 00:43:55 +0000 (0:00:02.116) 0:00:04.539 ****** 2025-09-08 00:47:25.840618 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:47:25.840630 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:47:25.840659 | orchestrator | changed: 
[testbed-node-0] 2025-09-08 00:47:25.840670 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:25.840738 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:25.840749 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:47:25.840776 | orchestrator | 2025-09-08 00:47:25.840787 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-09-08 00:47:25.840798 | orchestrator | Monday 08 September 2025 00:43:56 +0000 (0:00:01.654) 0:00:06.193 ****** 2025-09-08 00:47:25.840809 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:47:25.840819 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:47:25.840830 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:47:25.840841 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:25.840851 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:25.840864 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:25.840876 | orchestrator | 2025-09-08 00:47:25.840889 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-09-08 00:47:25.840901 | orchestrator | Monday 08 September 2025 00:43:57 +0000 (0:00:01.190) 0:00:07.384 ****** 2025-09-08 00:47:25.840914 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:47:25.840926 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:47:25.840938 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:25.840951 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:47:25.840964 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:47:25.840976 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:25.840988 | orchestrator | 2025-09-08 00:47:25.841016 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-09-08 00:47:25.841031 | orchestrator | Monday 08 September 2025 00:43:58 +0000 (0:00:00.896) 0:00:08.281 ****** 2025-09-08 00:47:25.841044 | orchestrator | skipping: 
[testbed-node-3] 2025-09-08 00:47:25.841056 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:47:25.841069 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:47:25.841082 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:25.841094 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:47:25.841107 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:25.841119 | orchestrator | 2025-09-08 00:47:25.841132 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-09-08 00:47:25.841145 | orchestrator | Monday 08 September 2025 00:43:59 +0000 (0:00:00.851) 0:00:09.132 ****** 2025-09-08 00:47:25.841158 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-08 00:47:25.841171 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-08 00:47:25.841184 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:47:25.841197 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-08 00:47:25.841209 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-08 00:47:25.841221 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:47:25.841231 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-08 00:47:25.841242 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-08 00:47:25.841253 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:47:25.841264 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-08 00:47:25.841285 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-08 00:47:25.841296 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:25.841307 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  
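The k3s_prereq tasks above persist kernel networking settings (IPv4/IPv6 forwarding changed on all six nodes; the br_netfilter and bridge-nf-call tasks were skipped in this run). A sketch of the equivalent sysctl drop-in; the file name is an assumption, and `accept_ra = 2` mirrors the "Enable IPv6 router advertisements" task:

```shell
# Sketch of the settings the k3s_prereq tasks persist (drop-in name assumed;
# written to the current directory here instead of /etc/sysctl.d/).
cat > 90-k3s-prereq.conf <<'EOF'
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.accept_ra = 2
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# Applying needs root and the br_netfilter module loaded:
#   modprobe br_netfilter && sysctl -p 90-k3s-prereq.conf
wc -l < 90-k3s-prereq.conf
```

The last two keys correspond to the skipped bridge-nf-call tasks and only take effect once br_netfilter is loaded.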
2025-09-08 00:47:25.841317 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-08 00:47:25.841328 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:47:25.841339 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-08 00:47:25.841349 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-08 00:47:25.841360 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:25.841371 | orchestrator | 2025-09-08 00:47:25.841381 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-09-08 00:47:25.841400 | orchestrator | Monday 08 September 2025 00:44:00 +0000 (0:00:00.740) 0:00:09.873 ****** 2025-09-08 00:47:25.841411 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:47:25.841422 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:47:25.841433 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:47:25.841443 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:25.841454 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:47:25.841465 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:25.841475 | orchestrator | 2025-09-08 00:47:25.841486 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-09-08 00:47:25.841498 | orchestrator | Monday 08 September 2025 00:44:01 +0000 (0:00:01.502) 0:00:11.375 ****** 2025-09-08 00:47:25.841509 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:47:25.841520 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:47:25.841531 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:47:25.841541 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:25.841552 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:25.841563 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:25.841573 | orchestrator | 2025-09-08 00:47:25.841584 | orchestrator 
| TASK [k3s_download : Download k3s binary x64] ********************************** 2025-09-08 00:47:25.841617 | orchestrator | Monday 08 September 2025 00:44:03 +0000 (0:00:01.141) 0:00:12.516 ****** 2025-09-08 00:47:25.841628 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:47:25.841638 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:25.841649 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:47:25.841660 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:25.841670 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:25.841681 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:47:25.841692 | orchestrator | 2025-09-08 00:47:25.841702 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-09-08 00:47:25.841713 | orchestrator | Monday 08 September 2025 00:44:08 +0000 (0:00:05.504) 0:00:18.021 ****** 2025-09-08 00:47:25.841724 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:47:25.841741 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:47:25.841751 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:47:25.841762 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:47:25.841773 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:25.841784 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:25.841794 | orchestrator | 2025-09-08 00:47:25.841805 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-09-08 00:47:25.841816 | orchestrator | Monday 08 September 2025 00:44:09 +0000 (0:00:01.345) 0:00:19.367 ****** 2025-09-08 00:47:25.841826 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:47:25.841837 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:47:25.841848 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:47:25.841858 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:25.841869 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:47:25.841879 
| orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:25.841890 | orchestrator | 2025-09-08 00:47:25.841901 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-09-08 00:47:25.841913 | orchestrator | Monday 08 September 2025 00:44:12 +0000 (0:00:02.171) 0:00:21.538 ****** 2025-09-08 00:47:25.841924 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:47:25.841935 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:47:25.841946 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:47:25.841956 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:25.841967 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:25.841977 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:25.841988 | orchestrator | 2025-09-08 00:47:25.841999 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-09-08 00:47:25.842010 | orchestrator | Monday 08 September 2025 00:44:13 +0000 (0:00:00.937) 0:00:22.476 ****** 2025-09-08 00:47:25.842080 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-09-08 00:47:25.842099 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-09-08 00:47:25.842109 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-09-08 00:47:25.842120 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-09-08 00:47:25.842131 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-09-08 00:47:25.842141 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-09-08 00:47:25.842152 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-09-08 00:47:25.842163 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-09-08 00:47:25.842173 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-09-08 00:47:25.842184 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-09-08 00:47:25.842194 | orchestrator | 
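The three download tasks above select one k3s binary per CPU architecture; only the x64 task ran here, the arm64 and armhf ones were skipped. A sketch of that suffix selection (the pinned version below is hypothetical, and note the k3s install script percent-encodes the `+` in the tag as `%2B`):

```shell
# Sketch of the per-arch binary selection behind the three download tasks.
K3S_VERSION="v1.30.2+k3s1"   # hypothetical pin; the job sets its own version
case "$(uname -m)" in
  x86_64)        suffix=""       ;;   # "Download k3s binary x64"
  aarch64)       suffix="-arm64" ;;   # "Download k3s binary arm64"
  armv6l|armv7l) suffix="-armhf" ;;   # "Download k3s binary armhf"
esac
url="https://github.com/k3s-io/k3s/releases/download/${K3S_VERSION}/k3s${suffix}"
echo "$url" > k3s-url.txt
cat k3s-url.txt
# Actual fetch (needs network and root):
#   curl -sfL -o /usr/local/bin/k3s "$url" && chmod +x /usr/local/bin/k3s
```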
changed: [testbed-node-2] => (item=rancher) 2025-09-08 00:47:25.842205 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-09-08 00:47:25.842215 | orchestrator | 2025-09-08 00:47:25.842226 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-09-08 00:47:25.842237 | orchestrator | Monday 08 September 2025 00:44:15 +0000 (0:00:02.895) 0:00:25.371 ****** 2025-09-08 00:47:25.842248 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:47:25.842258 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:47:25.842269 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:25.842280 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:25.842290 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:47:25.842301 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:25.842311 | orchestrator | 2025-09-08 00:47:25.842331 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-09-08 00:47:25.842342 | orchestrator | 2025-09-08 00:47:25.842353 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-09-08 00:47:25.842364 | orchestrator | Monday 08 September 2025 00:44:17 +0000 (0:00:01.528) 0:00:26.899 ****** 2025-09-08 00:47:25.842375 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:25.842386 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:25.842397 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:25.842407 | orchestrator | 2025-09-08 00:47:25.842418 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-09-08 00:47:25.842429 | orchestrator | Monday 08 September 2025 00:44:18 +0000 (0:00:01.088) 0:00:27.988 ****** 2025-09-08 00:47:25.842439 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:25.842450 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:25.842461 | orchestrator | ok: [testbed-node-1] 
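The k3s_custom_registries tasks above create /etc/rancher/k3s and insert mirror entries into registries.yaml on every node. A hypothetical shape of that file; the mirror endpoint and TLS setting below are assumptions for illustration, not taken from this log:

```shell
# Scratch stand-in for /etc/rancher/k3s (the role creates "rancher" then
# "rancher/k3s", matching the two loop items in the log).
mkdir -p rancher/k3s
cat > rancher/k3s/registries.yaml <<'EOF'
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"   # assumed pull-through cache
configs:
  "registry.example.com:5000":
    tls:
      insecure_skip_verify: true   # testbeds often run self-signed certs
EOF
grep -c 'endpoint' rancher/k3s/registries.yaml
```

containerd inside k3s picks this file up on service start, which is why it is written before the servers and agents are brought up.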
2025-09-08 00:47:25.842471 | orchestrator | 2025-09-08 00:47:25.842482 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-09-08 00:47:25.842493 | orchestrator | Monday 08 September 2025 00:44:19 +0000 (0:00:01.377) 0:00:29.366 ****** 2025-09-08 00:47:25.842504 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:25.842514 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:25.842525 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:25.842535 | orchestrator | 2025-09-08 00:47:25.842546 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-09-08 00:47:25.842557 | orchestrator | Monday 08 September 2025 00:44:20 +0000 (0:00:00.889) 0:00:30.255 ****** 2025-09-08 00:47:25.842568 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:25.842578 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:25.842589 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:25.842616 | orchestrator | 2025-09-08 00:47:25.842627 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-09-08 00:47:25.842638 | orchestrator | Monday 08 September 2025 00:44:21 +0000 (0:00:00.892) 0:00:31.148 ****** 2025-09-08 00:47:25.842649 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:25.842660 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:47:25.842670 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:25.842681 | orchestrator | 2025-09-08 00:47:25.842692 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-09-08 00:47:25.842710 | orchestrator | Monday 08 September 2025 00:44:22 +0000 (0:00:00.288) 0:00:31.436 ****** 2025-09-08 00:47:25.842721 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:25.842732 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:25.842742 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:25.842753 | orchestrator | 2025-09-08 
00:47:25.842764 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-09-08 00:47:25.842775 | orchestrator | Monday 08 September 2025 00:44:22 +0000 (0:00:00.703) 0:00:32.140 ****** 2025-09-08 00:47:25.842785 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:25.842802 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:25.842813 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:25.842824 | orchestrator | 2025-09-08 00:47:25.842834 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-09-08 00:47:25.842845 | orchestrator | Monday 08 September 2025 00:44:24 +0000 (0:00:01.319) 0:00:33.460 ****** 2025-09-08 00:47:25.842856 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:47:25.842867 | orchestrator | 2025-09-08 00:47:25.842878 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-09-08 00:47:25.842889 | orchestrator | Monday 08 September 2025 00:44:24 +0000 (0:00:00.735) 0:00:34.195 ****** 2025-09-08 00:47:25.842899 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:25.842910 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:25.842921 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:25.842932 | orchestrator | 2025-09-08 00:47:25.842942 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-09-08 00:47:25.842953 | orchestrator | Monday 08 September 2025 00:44:26 +0000 (0:00:01.381) 0:00:35.576 ****** 2025-09-08 00:47:25.842964 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:47:25.842975 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:25.842985 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:25.842996 | orchestrator | 2025-09-08 00:47:25.843007 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first 
master] ***************** 2025-09-08 00:47:25.843018 | orchestrator | Monday 08 September 2025 00:44:26 +0000 (0:00:00.618) 0:00:36.194 ****** 2025-09-08 00:47:25.843028 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:47:25.843039 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:25.843050 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:25.843060 | orchestrator | 2025-09-08 00:47:25.843071 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-09-08 00:47:25.843082 | orchestrator | Monday 08 September 2025 00:44:27 +0000 (0:00:01.050) 0:00:37.245 ****** 2025-09-08 00:47:25.843093 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:47:25.843103 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:25.843114 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:25.843125 | orchestrator | 2025-09-08 00:47:25.843135 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-09-08 00:47:25.843146 | orchestrator | Monday 08 September 2025 00:44:29 +0000 (0:00:01.327) 0:00:38.573 ****** 2025-09-08 00:47:25.843157 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:25.843168 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:47:25.843178 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:25.843189 | orchestrator | 2025-09-08 00:47:25.843200 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-09-08 00:47:25.843210 | orchestrator | Monday 08 September 2025 00:44:29 +0000 (0:00:00.304) 0:00:38.877 ****** 2025-09-08 00:47:25.843221 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:25.843232 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:47:25.843242 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:25.843253 | orchestrator | 2025-09-08 00:47:25.843264 | orchestrator | TASK [k3s_server : Init cluster inside the transient 
k3s-init service] ********* 2025-09-08 00:47:25.843275 | orchestrator | Monday 08 September 2025 00:44:29 +0000 (0:00:00.309) 0:00:39.187 ****** 2025-09-08 00:47:25.843285 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:25.843302 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:25.843313 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:25.843324 | orchestrator | 2025-09-08 00:47:25.843341 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-09-08 00:47:25.843352 | orchestrator | Monday 08 September 2025 00:44:31 +0000 (0:00:01.981) 0:00:41.168 ****** 2025-09-08 00:47:25.843364 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-08 00:47:25.843376 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-08 00:47:25.843387 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-08 00:47:25.843398 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-08 00:47:25.843408 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-08 00:47:25.843419 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-08 00:47:25.843430 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
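The "Verify that all nodes actually joined" task above polls with up to 20 retries, which is what produces the runs of FAILED - RETRYING lines before the eventual ok. A minimal sketch of that until-loop; `all_masters_ready` is a stand-in for the real kubectl node query, rigged here to succeed on the third attempt:

```shell
# Sketch of a bounded retry loop like the verify task's (20 retries).
attempt=0
all_masters_ready() { [ "$attempt" -ge 3 ]; }   # stand-in check
tries=20
until all_masters_ready; do
  attempt=$((attempt + 1))
  tries=$((tries - 1))
  [ "$tries" -gt 0 ] || { echo "nodes never joined" > verify-result.txt; exit 1; }
  sleep 1   # the real task waits noticeably longer between retries
done
echo "all masters joined after $attempt retries" > verify-result.txt
cat verify-result.txt
```

In the real run the loop took three retries per node and about 44 seconds before all three masters reported in.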
2025-09-08 00:47:25.843441 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-08 00:47:25.843452 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-08 00:47:25.843463 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-08 00:47:25.843473 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-08 00:47:25.843484 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-08 00:47:25.843495 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:25.843506 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:25.843517 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:25.843528 | orchestrator | 2025-09-08 00:47:25.843538 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-09-08 00:47:25.843549 | orchestrator | Monday 08 September 2025 00:45:16 +0000 (0:00:44.496) 0:01:25.665 ****** 2025-09-08 00:47:25.843560 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:25.843571 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:47:25.843581 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:25.843608 | orchestrator | 2025-09-08 00:47:25.843619 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-09-08 00:47:25.843630 | orchestrator | Monday 08 September 2025 00:45:16 +0000 (0:00:00.330) 0:01:25.995 ****** 2025-09-08 00:47:25.843641 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:25.843652 | orchestrator | changed: 
[testbed-node-1] 2025-09-08 00:47:25.843662 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:25.843673 | orchestrator | 2025-09-08 00:47:25.843684 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-09-08 00:47:25.843694 | orchestrator | Monday 08 September 2025 00:45:17 +0000 (0:00:01.081) 0:01:27.077 ****** 2025-09-08 00:47:25.843705 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:25.843716 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:25.843726 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:25.843737 | orchestrator | 2025-09-08 00:47:25.843748 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-09-08 00:47:25.843766 | orchestrator | Monday 08 September 2025 00:45:18 +0000 (0:00:01.226) 0:01:28.304 ****** 2025-09-08 00:47:25.843776 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:25.843787 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:25.843798 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:25.843808 | orchestrator | 2025-09-08 00:47:25.843819 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-09-08 00:47:25.843838 | orchestrator | Monday 08 September 2025 00:45:43 +0000 (0:00:24.778) 0:01:53.083 ****** 2025-09-08 00:47:25.843849 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:25.843860 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:25.843870 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:25.843881 | orchestrator | 2025-09-08 00:47:25.843892 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-09-08 00:47:25.843902 | orchestrator | Monday 08 September 2025 00:45:44 +0000 (0:00:00.789) 0:01:53.873 ****** 2025-09-08 00:47:25.843913 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:25.843924 | orchestrator | ok: [testbed-node-1] 2025-09-08 
00:47:25.843934 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:25.843945 | orchestrator | 2025-09-08 00:47:25.843956 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-09-08 00:47:25.843967 | orchestrator | Monday 08 September 2025 00:45:45 +0000 (0:00:00.682) 0:01:54.556 ****** 2025-09-08 00:47:25.843977 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:25.843988 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:25.843999 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:25.844009 | orchestrator | 2025-09-08 00:47:25.844020 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-09-08 00:47:25.844031 | orchestrator | Monday 08 September 2025 00:45:45 +0000 (0:00:00.629) 0:01:55.185 ****** 2025-09-08 00:47:25.844041 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:25.844058 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:25.844069 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:25.844080 | orchestrator | 2025-09-08 00:47:25.844091 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-09-08 00:47:25.844101 | orchestrator | Monday 08 September 2025 00:45:46 +0000 (0:00:00.871) 0:01:56.057 ****** 2025-09-08 00:47:25.844112 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:25.844123 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:25.844133 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:25.844144 | orchestrator | 2025-09-08 00:47:25.844155 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-09-08 00:47:25.844166 | orchestrator | Monday 08 September 2025 00:45:47 +0000 (0:00:00.506) 0:01:56.564 ****** 2025-09-08 00:47:25.844176 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:25.844187 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:25.844198 | orchestrator | changed: 
[testbed-node-2] 2025-09-08 00:47:25.844208 | orchestrator | 2025-09-08 00:47:25.844219 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-09-08 00:47:25.844230 | orchestrator | Monday 08 September 2025 00:45:48 +0000 (0:00:00.864) 0:01:57.429 ****** 2025-09-08 00:47:25.844240 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:25.844251 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:25.844262 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:25.844272 | orchestrator | 2025-09-08 00:47:25.844283 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-09-08 00:47:25.844294 | orchestrator | Monday 08 September 2025 00:45:48 +0000 (0:00:00.833) 0:01:58.262 ****** 2025-09-08 00:47:25.844304 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:25.844315 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:25.844326 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:25.844336 | orchestrator | 2025-09-08 00:47:25.844347 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-09-08 00:47:25.844358 | orchestrator | Monday 08 September 2025 00:45:50 +0000 (0:00:01.419) 0:01:59.683 ****** 2025-09-08 00:47:25.844377 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:25.844388 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:25.844398 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:25.844409 | orchestrator | 2025-09-08 00:47:25.844419 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-09-08 00:47:25.844430 | orchestrator | Monday 08 September 2025 00:45:51 +0000 (0:00:01.029) 0:02:00.712 ****** 2025-09-08 00:47:25.844441 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:25.844451 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:47:25.844462 | orchestrator | skipping: 
[testbed-node-2] 2025-09-08 00:47:25.844473 | orchestrator | 2025-09-08 00:47:25.844483 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-09-08 00:47:25.844494 | orchestrator | Monday 08 September 2025 00:45:51 +0000 (0:00:00.486) 0:02:01.199 ****** 2025-09-08 00:47:25.844510 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:25.844521 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:47:25.844532 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:25.844542 | orchestrator | 2025-09-08 00:47:25.844553 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-09-08 00:47:25.844564 | orchestrator | Monday 08 September 2025 00:45:52 +0000 (0:00:00.475) 0:02:01.674 ****** 2025-09-08 00:47:25.844574 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:25.844585 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:25.844624 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:25.844635 | orchestrator | 2025-09-08 00:47:25.844646 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-09-08 00:47:25.844657 | orchestrator | Monday 08 September 2025 00:45:53 +0000 (0:00:01.264) 0:02:02.939 ****** 2025-09-08 00:47:25.844667 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:25.844678 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:25.844689 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:25.844699 | orchestrator | 2025-09-08 00:47:25.844710 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-09-08 00:47:25.844721 | orchestrator | Monday 08 September 2025 00:45:54 +0000 (0:00:00.933) 0:02:03.872 ****** 2025-09-08 00:47:25.844731 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-08 00:47:25.844742 | orchestrator | 
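A few steps above, the "Configure kubectl cluster to https://192.168.16.8:6443" task points the kubeconfig copied to the user's home directory at the API VIP instead of the localhost default that k3s writes. A sed-based sketch against a scratch copy (the real target is `~/.kube/config`, and the role presumably uses `kubectl config set-cluster` rather than sed):

```shell
# Scratch kubeconfig fragment with the default k3s server address.
cat > kubeconfig <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF
# Rewrite the server to the VIP shown in the task name.
sed -i 's|https://127.0.0.1:6443|https://192.168.16.8:6443|' kubeconfig
grep 'server:' kubeconfig
```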
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-08 00:47:25.844753 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-08 00:47:25.844764 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-08 00:47:25.844774 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-08 00:47:25.844785 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-08 00:47:25.844795 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-08 00:47:25.844806 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-08 00:47:25.844817 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-08 00:47:25.844827 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-08 00:47:25.844838 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-09-08 00:47:25.844849 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-08 00:47:25.844859 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-08 00:47:25.844875 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-09-08 00:47:25.844894 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-08 00:47:25.844905 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-08 00:47:25.844915 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-08 00:47:25.844926 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-08 00:47:25.844937 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-08 00:47:25.844947 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-08 00:47:25.844958 | orchestrator |
2025-09-08 00:47:25.844969 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-09-08 00:47:25.844979 | orchestrator |
2025-09-08 00:47:25.844990 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-09-08 00:47:25.845001 | orchestrator | Monday 08 September 2025 00:45:57 +0000 (0:00:03.219) 0:02:07.091 ******
2025-09-08 00:47:25.845012 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:47:25.845022 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:47:25.845033 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:47:25.845044 | orchestrator |
2025-09-08 00:47:25.845055 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-09-08 00:47:25.845065 | orchestrator | Monday 08 September 2025 00:45:58 +0000 (0:00:00.553) 0:02:07.645 ******
2025-09-08 00:47:25.845076 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:47:25.845087 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:47:25.845097 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:47:25.845108 | orchestrator |
2025-09-08 00:47:25.845119 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-09-08 00:47:25.845129 | orchestrator | Monday 08 September 2025 00:45:58 +0000 (0:00:00.634) 0:02:08.280 ******
2025-09-08 00:47:25.845140 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:47:25.845151 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:47:25.845161 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:47:25.845172 | orchestrator |
2025-09-08 00:47:25.845182 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-09-08 00:47:25.845193 | orchestrator | Monday 08 September 2025 00:45:59 +0000 (0:00:00.318) 0:02:08.598 ******
2025-09-08 00:47:25.845204 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:47:25.845215 | orchestrator |
2025-09-08 00:47:25.845226 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-09-08 00:47:25.845242 | orchestrator | Monday 08 September 2025 00:45:59 +0000 (0:00:00.673) 0:02:09.272 ******
2025-09-08 00:47:25.845252 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:47:25.845263 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:47:25.845274 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:47:25.845285 | orchestrator |
2025-09-08 00:47:25.845295 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-09-08 00:47:25.845306 | orchestrator | Monday 08 September 2025 00:46:00 +0000 (0:00:00.315) 0:02:09.587 ******
2025-09-08 00:47:25.845317 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:47:25.845327 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:47:25.845338 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:47:25.845349 | orchestrator |
2025-09-08 00:47:25.845359 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-09-08 00:47:25.845370 | orchestrator | Monday 08 September 2025 00:46:00 +0000 (0:00:00.294) 0:02:09.882 ******
2025-09-08 00:47:25.845380 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:47:25.845391 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:47:25.845402 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:47:25.845412 | orchestrator |
2025-09-08 00:47:25.845423 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2025-09-08 00:47:25.845446 | orchestrator | Monday 08 September 2025 00:46:00 +0000 (0:00:00.346) 0:02:10.228 ******
2025-09-08 00:47:25.845457 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:47:25.845468 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:47:25.845478 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:47:25.845489 | orchestrator |
2025-09-08 00:47:25.845500 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2025-09-08 00:47:25.845511 | orchestrator | Monday 08 September 2025 00:46:01 +0000 (0:00:00.863) 0:02:11.092 ******
2025-09-08 00:47:25.845521 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:47:25.845532 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:47:25.845543 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:47:25.845553 | orchestrator |
2025-09-08 00:47:25.845564 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-09-08 00:47:25.845574 | orchestrator | Monday 08 September 2025 00:46:02 +0000 (0:00:01.195) 0:02:12.288 ******
2025-09-08 00:47:25.845585 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:47:25.845610 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:47:25.845620 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:47:25.845631 | orchestrator |
2025-09-08 00:47:25.845642 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-09-08 00:47:25.845652 | orchestrator | Monday 08 September 2025 00:46:04 +0000 (0:00:01.320) 0:02:13.608 ******
2025-09-08 00:47:25.845663 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:47:25.845674 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:47:25.845684 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:47:25.845695 | orchestrator |
2025-09-08 00:47:25.845705 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-09-08 00:47:25.845716 | orchestrator |
2025-09-08 00:47:25.845727 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-09-08 00:47:25.845737 | orchestrator | Monday 08 September 2025 00:46:17 +0000 (0:00:13.494) 0:02:27.103 ******
2025-09-08 00:47:25.845748 | orchestrator | ok: [testbed-manager]
2025-09-08 00:47:25.845759 | orchestrator |
2025-09-08 00:47:25.845770 | orchestrator | TASK [Create .kube directory] **************************************************
2025-09-08 00:47:25.845780 | orchestrator | Monday 08 September 2025 00:46:18 +0000 (0:00:00.717) 0:02:27.820 ******
2025-09-08 00:47:25.845796 | orchestrator | changed: [testbed-manager]
2025-09-08 00:47:25.845808 | orchestrator |
2025-09-08 00:47:25.845818 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-08 00:47:25.845829 | orchestrator | Monday 08 September 2025 00:46:18 +0000 (0:00:00.424) 0:02:28.245 ******
2025-09-08 00:47:25.845840 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-08 00:47:25.845851 | orchestrator |
2025-09-08 00:47:25.845862 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-08 00:47:25.845873 | orchestrator | Monday 08 September 2025 00:46:19 +0000 (0:00:00.550) 0:02:28.795 ******
2025-09-08 00:47:25.845883 | orchestrator | changed: [testbed-manager]
2025-09-08 00:47:25.845894 | orchestrator |
2025-09-08 00:47:25.845905 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-09-08 00:47:25.845916 | orchestrator | Monday 08 September 2025 00:46:20 +0000 (0:00:00.880) 0:02:29.675 ******
2025-09-08 00:47:25.845926 | orchestrator | changed: [testbed-manager]
2025-09-08 00:47:25.845937 | orchestrator |
2025-09-08 00:47:25.845948 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-09-08 00:47:25.845958 | orchestrator | Monday 08 September 2025 00:46:20 +0000 (0:00:00.610) 0:02:30.286 ******
2025-09-08 00:47:25.845969 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-08 00:47:25.845979 | orchestrator |
2025-09-08 00:47:25.845990 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-09-08 00:47:25.846001 | orchestrator | Monday 08 September 2025 00:46:22 +0000 (0:00:01.598) 0:02:31.884 ******
2025-09-08 00:47:25.846011 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-08 00:47:25.846057 | orchestrator |
2025-09-08 00:47:25.846068 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-09-08 00:47:25.846078 | orchestrator | Monday 08 September 2025 00:46:23 +0000 (0:00:00.875) 0:02:32.759 ******
2025-09-08 00:47:25.846089 | orchestrator | changed: [testbed-manager]
2025-09-08 00:47:25.846100 | orchestrator |
2025-09-08 00:47:25.846110 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-09-08 00:47:25.846121 | orchestrator | Monday 08 September 2025 00:46:23 +0000 (0:00:00.420) 0:02:33.180 ******
2025-09-08 00:47:25.846132 | orchestrator | changed: [testbed-manager]
2025-09-08 00:47:25.846142 | orchestrator |
2025-09-08 00:47:25.846153 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-09-08 00:47:25.846164 | orchestrator |
2025-09-08 00:47:25.846175 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-09-08 00:47:25.846186 | orchestrator | Monday 08 September 2025 00:46:24 +0000 (0:00:00.718) 0:02:33.898 ******
2025-09-08 00:47:25.846196 | orchestrator | ok: [testbed-manager]
2025-09-08 00:47:25.846207 | orchestrator |
2025-09-08 00:47:25.846218 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-09-08 00:47:25.846234 | orchestrator | Monday 08 September 2025 00:46:24 +0000 (0:00:00.148) 0:02:34.047 ******
2025-09-08 00:47:25.846245 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-09-08 00:47:25.846256 | orchestrator |
2025-09-08 00:47:25.846267 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-09-08 00:47:25.846277 | orchestrator | Monday 08 September 2025 00:46:24 +0000 (0:00:00.299) 0:02:34.346 ******
2025-09-08 00:47:25.846288 | orchestrator | ok: [testbed-manager]
2025-09-08 00:47:25.846299 | orchestrator |
2025-09-08 00:47:25.846310 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2025-09-08 00:47:25.846320 | orchestrator | Monday 08 September 2025 00:46:25 +0000 (0:00:00.999) 0:02:35.346 ******
2025-09-08 00:47:25.846331 | orchestrator | ok: [testbed-manager]
2025-09-08 00:47:25.846342 | orchestrator |
2025-09-08 00:47:25.846352 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-09-08 00:47:25.846363 | orchestrator | Monday 08 September 2025 00:46:27 +0000 (0:00:01.929) 0:02:37.275 ******
2025-09-08 00:47:25.846374 | orchestrator | changed: [testbed-manager]
2025-09-08 00:47:25.846385 | orchestrator |
2025-09-08 00:47:25.846395 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-09-08 00:47:25.846406 | orchestrator | Monday 08 September 2025 00:46:28 +0000 (0:00:00.937) 0:02:38.212 ******
2025-09-08 00:47:25.846417 | orchestrator | ok: [testbed-manager]
2025-09-08 00:47:25.846427 | orchestrator |
2025-09-08 00:47:25.846438 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-09-08 00:47:25.846449 | orchestrator | Monday 08 September 2025 00:46:29 +0000 (0:00:00.502) 0:02:38.715 ******
2025-09-08 00:47:25.846459 | orchestrator | changed: [testbed-manager]
2025-09-08 00:47:25.846470 | orchestrator |
2025-09-08 00:47:25.846481 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-09-08 00:47:25.846492 | orchestrator | Monday 08 September 2025 00:46:36 +0000 (0:00:07.148) 0:02:45.863 ******
2025-09-08 00:47:25.846502 | orchestrator | changed: [testbed-manager]
2025-09-08 00:47:25.846513 | orchestrator |
2025-09-08 00:47:25.846524 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-09-08 00:47:25.846534 | orchestrator | Monday 08 September 2025 00:46:52 +0000 (0:00:16.506) 0:03:02.370 ******
2025-09-08 00:47:25.846545 | orchestrator | ok: [testbed-manager]
2025-09-08 00:47:25.846556 | orchestrator |
2025-09-08 00:47:25.846566 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-09-08 00:47:25.846577 | orchestrator |
2025-09-08 00:47:25.846588 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-09-08 00:47:25.846650 | orchestrator | Monday 08 September 2025 00:46:53 +0000 (0:00:00.465) 0:03:02.835 ******
2025-09-08 00:47:25.846661 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:47:25.846679 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:47:25.846690 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:47:25.846701 | orchestrator |
2025-09-08 00:47:25.846710 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-09-08 00:47:25.846720 | orchestrator | Monday 08 September 2025 00:46:53 +0000 (0:00:00.327) 0:03:03.163 ******
2025-09-08 00:47:25.846729 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.846739 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:47:25.846749 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:47:25.846758 | orchestrator |
2025-09-08 00:47:25.846773 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-09-08 00:47:25.846783 | orchestrator | Monday 08 September 2025 00:46:54 +0000 (0:00:00.327) 0:03:03.490 ******
2025-09-08 00:47:25.846793 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:47:25.846803 | orchestrator |
2025-09-08 00:47:25.846812 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-09-08 00:47:25.846822 | orchestrator | Monday 08 September 2025 00:46:54 +0000 (0:00:00.696) 0:03:04.187 ******
2025-09-08 00:47:25.846831 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.846841 | orchestrator |
2025-09-08 00:47:25.846851 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
2025-09-08 00:47:25.846860 | orchestrator | Monday 08 September 2025 00:46:54 +0000 (0:00:00.189) 0:03:04.377 ******
2025-09-08 00:47:25.846870 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.846879 | orchestrator |
2025-09-08 00:47:25.846889 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
2025-09-08 00:47:25.846898 | orchestrator | Monday 08 September 2025 00:46:55 +0000 (0:00:00.270) 0:03:04.647 ******
2025-09-08 00:47:25.846908 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.846917 | orchestrator |
2025-09-08 00:47:25.846927 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
2025-09-08 00:47:25.846937 | orchestrator | Monday 08 September 2025 00:46:55 +0000 (0:00:00.205) 0:03:04.853 ******
2025-09-08 00:47:25.846946 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.846956 | orchestrator |
2025-09-08 00:47:25.846965 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
2025-09-08 00:47:25.846975 | orchestrator | Monday 08 September 2025 00:46:55 +0000 (0:00:00.329) 0:03:05.182 ******
2025-09-08 00:47:25.846984 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.846994 | orchestrator |
2025-09-08 00:47:25.847004 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] **********************
2025-09-08 00:47:25.847013 | orchestrator | Monday 08 September 2025 00:46:56 +0000 (0:00:00.326) 0:03:05.508 ******
2025-09-08 00:47:25.847022 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847032 | orchestrator |
2025-09-08 00:47:25.847041 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
2025-09-08 00:47:25.847051 | orchestrator | Monday 08 September 2025 00:46:56 +0000 (0:00:00.234) 0:03:05.742 ******
2025-09-08 00:47:25.847070 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847081 | orchestrator |
2025-09-08 00:47:25.847090 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
2025-09-08 00:47:25.847100 | orchestrator | Monday 08 September 2025 00:46:56 +0000 (0:00:00.181) 0:03:05.924 ******
2025-09-08 00:47:25.847109 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847119 | orchestrator |
2025-09-08 00:47:25.847133 | orchestrator | TASK [k3s_server_post : Set architecture variable] *****************************
2025-09-08 00:47:25.847143 | orchestrator | Monday 08 September 2025 00:46:56 +0000 (0:00:00.201) 0:03:06.125 ******
2025-09-08 00:47:25.847153 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847162 | orchestrator |
2025-09-08 00:47:25.847171 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
2025-09-08 00:47:25.847181 | orchestrator | Monday 08 September 2025 00:46:56 +0000 (0:00:00.195) 0:03:06.321 ******
2025-09-08 00:47:25.847198 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)
2025-09-08 00:47:25.847208 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
2025-09-08 00:47:25.847217 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847227 | orchestrator |
2025-09-08 00:47:25.847236 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] *************************
2025-09-08 00:47:25.847246 | orchestrator | Monday 08 September 2025 00:46:57 +0000 (0:00:00.651) 0:03:06.972 ******
2025-09-08 00:47:25.847255 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847265 | orchestrator |
2025-09-08 00:47:25.847274 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
2025-09-08 00:47:25.847284 | orchestrator | Monday 08 September 2025 00:46:57 +0000 (0:00:00.197) 0:03:07.170 ******
2025-09-08 00:47:25.847293 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847302 | orchestrator |
2025-09-08 00:47:25.847312 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
2025-09-08 00:47:25.847321 | orchestrator | Monday 08 September 2025 00:46:57 +0000 (0:00:00.202) 0:03:07.372 ******
2025-09-08 00:47:25.847331 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847340 | orchestrator |
2025-09-08 00:47:25.847349 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-09-08 00:47:25.847359 | orchestrator | Monday 08 September 2025 00:46:58 +0000 (0:00:00.203) 0:03:07.575 ******
2025-09-08 00:47:25.847368 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847378 | orchestrator |
2025-09-08 00:47:25.847387 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-09-08 00:47:25.847397 | orchestrator | Monday 08 September 2025 00:46:58 +0000 (0:00:00.224) 0:03:07.800 ******
2025-09-08 00:47:25.847406 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847416 | orchestrator |
2025-09-08 00:47:25.847425 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-09-08 00:47:25.847435 | orchestrator | Monday 08 September 2025 00:46:58 +0000 (0:00:00.190) 0:03:07.990 ******
2025-09-08 00:47:25.847444 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847453 | orchestrator |
2025-09-08 00:47:25.847463 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-09-08 00:47:25.847472 | orchestrator | Monday 08 September 2025 00:46:58 +0000 (0:00:00.202) 0:03:08.193 ******
2025-09-08 00:47:25.847482 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847491 | orchestrator |
2025-09-08 00:47:25.847501 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-09-08 00:47:25.847510 | orchestrator | Monday 08 September 2025 00:46:58 +0000 (0:00:00.201) 0:03:08.394 ******
2025-09-08 00:47:25.847520 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847529 | orchestrator |
2025-09-08 00:47:25.847539 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-09-08 00:47:25.847553 | orchestrator | Monday 08 September 2025 00:46:59 +0000 (0:00:00.170) 0:03:08.564 ******
2025-09-08 00:47:25.847563 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847573 | orchestrator |
2025-09-08 00:47:25.847582 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-09-08 00:47:25.847609 | orchestrator | Monday 08 September 2025 00:46:59 +0000 (0:00:00.202) 0:03:08.767 ******
2025-09-08 00:47:25.847619 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847629 | orchestrator |
2025-09-08 00:47:25.847638 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-09-08 00:47:25.847648 | orchestrator | Monday 08 September 2025 00:46:59 +0000 (0:00:00.184) 0:03:08.951 ******
2025-09-08 00:47:25.847658 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847667 | orchestrator |
2025-09-08 00:47:25.847677 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-09-08 00:47:25.847686 | orchestrator | Monday 08 September 2025 00:46:59 +0000 (0:00:00.208) 0:03:09.159 ******
2025-09-08 00:47:25.847696 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)
2025-09-08 00:47:25.847717 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)
2025-09-08 00:47:25.847727 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)
2025-09-08 00:47:25.847737 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)
2025-09-08 00:47:25.847746 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847756 | orchestrator |
2025-09-08 00:47:25.847765 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-09-08 00:47:25.847775 | orchestrator | Monday 08 September 2025 00:47:00 +0000 (0:00:00.769) 0:03:09.929 ******
2025-09-08 00:47:25.847784 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847794 | orchestrator |
2025-09-08 00:47:25.847804 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-09-08 00:47:25.847813 | orchestrator | Monday 08 September 2025 00:47:00 +0000 (0:00:00.222) 0:03:10.151 ******
2025-09-08 00:47:25.847823 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847832 | orchestrator |
2025-09-08 00:47:25.847842 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-09-08 00:47:25.847851 | orchestrator | Monday 08 September 2025 00:47:00 +0000 (0:00:00.225) 0:03:10.377 ******
2025-09-08 00:47:25.847861 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847870 | orchestrator |
2025-09-08 00:47:25.847880 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-09-08 00:47:25.847889 | orchestrator | Monday 08 September 2025 00:47:01 +0000 (0:00:00.211) 0:03:10.589 ******
2025-09-08 00:47:25.847899 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847909 | orchestrator |
2025-09-08 00:47:25.847923 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-09-08 00:47:25.847933 | orchestrator | Monday 08 September 2025 00:47:01 +0000 (0:00:00.201) 0:03:10.790 ******
2025-09-08 00:47:25.847942 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-09-08 00:47:25.847952 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-09-08 00:47:25.847961 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.847971 | orchestrator |
2025-09-08 00:47:25.847980 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-09-08 00:47:25.847990 | orchestrator | Monday 08 September 2025 00:47:01 +0000 (0:00:00.372) 0:03:11.163 ******
2025-09-08 00:47:25.847999 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.848009 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:47:25.848019 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:47:25.848028 | orchestrator |
2025-09-08 00:47:25.848038 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-09-08 00:47:25.848047 | orchestrator | Monday 08 September 2025 00:47:02 +0000 (0:00:00.429) 0:03:11.593 ******
2025-09-08 00:47:25.848057 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:47:25.848067 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:47:25.848076 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:47:25.848086 | orchestrator |
2025-09-08 00:47:25.848096 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-09-08 00:47:25.848105 | orchestrator |
2025-09-08 00:47:25.848115 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-09-08 00:47:25.848124 | orchestrator | Monday 08 September 2025 00:47:03 +0000 (0:00:01.612) 0:03:13.205 ******
2025-09-08 00:47:25.848134 | orchestrator | ok: [testbed-manager]
2025-09-08 00:47:25.848143 | orchestrator |
2025-09-08 00:47:25.848153 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2025-09-08 00:47:25.848162 | orchestrator | Monday 08 September 2025 00:47:03 +0000 (0:00:00.183) 0:03:13.389 ******
2025-09-08 00:47:25.848172 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-09-08 00:47:25.848182 | orchestrator |
2025-09-08 00:47:25.848191 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2025-09-08 00:47:25.848201 | orchestrator | Monday 08 September 2025 00:47:04 +0000 (0:00:00.244) 0:03:13.633 ******
2025-09-08 00:47:25.848217 | orchestrator | changed: [testbed-manager]
2025-09-08 00:47:25.848226 | orchestrator |
2025-09-08 00:47:25.848236 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-09-08 00:47:25.848245 | orchestrator |
2025-09-08 00:47:25.848255 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-09-08 00:47:25.848264 | orchestrator | Monday 08 September 2025 00:47:10 +0000 (0:00:06.438) 0:03:20.072 ******
2025-09-08 00:47:25.848274 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:47:25.848284 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:47:25.848293 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:47:25.848303 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:47:25.848312 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:47:25.848322 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:47:25.848331 | orchestrator |
2025-09-08 00:47:25.848341 | orchestrator | TASK [Manage labels] ***********************************************************
2025-09-08 00:47:25.848351 | orchestrator | Monday 08 September 2025 00:47:11 +0000 (0:00:00.539) 0:03:20.611 ******
2025-09-08 00:47:25.848365 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-08 00:47:25.848375 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-08 00:47:25.848385 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-08 00:47:25.848394 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-08 00:47:25.848404 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-08 00:47:25.848413 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-08 00:47:25.848423 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-08 00:47:25.848432 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-08 00:47:25.848442 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-08 00:47:25.848451 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-08 00:47:25.848461 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-08 00:47:25.848470 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-08 00:47:25.848480 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-08 00:47:25.848490 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-08 00:47:25.848499 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-08 00:47:25.848509 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-08 00:47:25.848518 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-08 00:47:25.848528 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-08 00:47:25.848537 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-08 00:47:25.848547 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-08 00:47:25.848561 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-08 00:47:25.848571 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-08 00:47:25.848580 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-08 00:47:25.848590 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-08 00:47:25.848640 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-08 00:47:25.848657 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-08 00:47:25.848666 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-08 00:47:25.848676 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-08 00:47:25.848686 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-08 00:47:25.848695 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-08 00:47:25.848705 | orchestrator |
2025-09-08 00:47:25.848714 | orchestrator | TASK [Manage annotations] ******************************************************
2025-09-08 00:47:25.848724 | orchestrator | Monday 08 September 2025 00:47:23 +0000 (0:00:12.004) 0:03:32.615 ******
2025-09-08 00:47:25.848733 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:47:25.848743 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:47:25.848752 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:47:25.848762 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.848772 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:47:25.848781 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:47:25.848791 | orchestrator |
2025-09-08 00:47:25.848800 | orchestrator | TASK [Manage taints] ***********************************************************
2025-09-08 00:47:25.848810 | orchestrator | Monday 08 September 2025 00:47:23 +0000 (0:00:00.567) 0:03:33.183 ******
2025-09-08 00:47:25.848819 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:47:25.848829 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:47:25.848838 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:47:25.848848 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:25.848857 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:47:25.848867 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:47:25.848876 | orchestrator |
2025-09-08 00:47:25.848886 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:47:25.848896 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:47:25.848906 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0
2025-09-08 00:47:25.848917 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-08 00:47:25.848932 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-08 00:47:25.848942 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-08 00:47:25.848952 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-08 00:47:25.848961 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-08 00:47:25.848971 | orchestrator |
2025-09-08 00:47:25.848980 | orchestrator |
2025-09-08 00:47:25.848990 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:47:25.849000 | orchestrator | Monday 08 September 2025 00:47:24 +0000 (0:00:00.353) 0:03:33.536 ******
2025-09-08 00:47:25.849009 | orchestrator | ===============================================================================
2025-09-08 00:47:25.849019 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 44.50s
2025-09-08 00:47:25.849029 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.78s
2025-09-08 00:47:25.849038 | orchestrator | kubectl : Install required packages ------------------------------------ 16.51s
2025-09-08 00:47:25.849058 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 13.49s
2025-09-08 00:47:25.849067 | orchestrator | Manage labels ---------------------------------------------------------- 12.00s
2025-09-08 00:47:25.849077 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.15s
2025-09-08 00:47:25.849087 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.44s
2025-09-08 00:47:25.849096 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.51s
2025-09-08 00:47:25.849106 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.22s
2025-09-08 00:47:25.849116 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.90s
2025-09-08 00:47:25.849124 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.17s
2025-09-08 00:47:25.849136 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.12s
2025-09-08 00:47:25.849144 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.98s
2025-09-08 00:47:25.849152 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.93s
2025-09-08 00:47:25.849160 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.65s
2025-09-08 00:47:25.849167 | orchestrator | k3s_server_post : Remove tmp directory used for manifests --------------- 1.61s
2025-09-08 00:47:25.849175 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.60s
2025-09-08 00:47:25.849183 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.53s
2025-09-08 00:47:25.849191 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.50s
2025-09-08 00:47:25.849199 | orchestrator | k3s_server : Copy config file to user home directory -------------------- 1.42s
2025-09-08 00:47:28.890828 | orchestrator | 2025-09-08 00:47:28 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:47:28.893459 | orchestrator | 2025-09-08 00:47:28 | INFO  | 
Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:47:28.893851 | orchestrator | 2025-09-08 00:47:28 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:47:28.895739 | orchestrator | 2025-09-08 00:47:28 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED 2025-09-08 00:47:28.895982 | orchestrator | 2025-09-08 00:47:28 | INFO  | Task 0b8dcaa0-c70f-4c00-ad39-53633c045b64 is in state STARTED 2025-09-08 00:47:28.896776 | orchestrator | 2025-09-08 00:47:28 | INFO  | Task 06ece321-4971-498a-875b-238099645e2c is in state STARTED 2025-09-08 00:47:28.897008 | orchestrator | 2025-09-08 00:47:28 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:47:32.025079 | orchestrator | 2025-09-08 00:47:32 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:47:32.025177 | orchestrator | 2025-09-08 00:47:32 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:47:32.025190 | orchestrator | 2025-09-08 00:47:32 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:47:32.025201 | orchestrator | 2025-09-08 00:47:32 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED 2025-09-08 00:47:32.025212 | orchestrator | 2025-09-08 00:47:32 | INFO  | Task 0b8dcaa0-c70f-4c00-ad39-53633c045b64 is in state STARTED 2025-09-08 00:47:32.025223 | orchestrator | 2025-09-08 00:47:32 | INFO  | Task 06ece321-4971-498a-875b-238099645e2c is in state STARTED 2025-09-08 00:47:32.025234 | orchestrator | 2025-09-08 00:47:32 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:47:35.157119 | orchestrator | 2025-09-08 00:47:35 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:47:35.157261 | orchestrator | 2025-09-08 00:47:35 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:47:35.157276 | orchestrator | 2025-09-08 00:47:35 | INFO  | Task 
be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:47:35.159645 | orchestrator | 2025-09-08 00:47:35 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED 2025-09-08 00:47:35.160442 | orchestrator | 2025-09-08 00:47:35 | INFO  | Task 0b8dcaa0-c70f-4c00-ad39-53633c045b64 is in state STARTED 2025-09-08 00:47:35.160842 | orchestrator | 2025-09-08 00:47:35 | INFO  | Task 06ece321-4971-498a-875b-238099645e2c is in state SUCCESS 2025-09-08 00:47:35.160871 | orchestrator | 2025-09-08 00:47:35 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:47:38.191946 | orchestrator | 2025-09-08 00:47:38 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:47:38.193153 | orchestrator | 2025-09-08 00:47:38 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:47:38.194963 | orchestrator | 2025-09-08 00:47:38 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:47:38.196218 | orchestrator | 2025-09-08 00:47:38 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED 2025-09-08 00:47:38.197081 | orchestrator | 2025-09-08 00:47:38 | INFO  | Task 0b8dcaa0-c70f-4c00-ad39-53633c045b64 is in state SUCCESS 2025-09-08 00:47:38.197218 | orchestrator | 2025-09-08 00:47:38 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:47:41.237383 | orchestrator | 2025-09-08 00:47:41 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:47:41.238589 | orchestrator | 2025-09-08 00:47:41 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:47:41.241621 | orchestrator | 2025-09-08 00:47:41 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:47:41.242449 | orchestrator | 2025-09-08 00:47:41 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state STARTED 2025-09-08 00:47:41.242474 | orchestrator | 2025-09-08 00:47:41 | INFO  | Wait 1 
second(s) until the next check 2025-09-08 00:47:44.282672 | orchestrator | 2025-09-08 00:47:44 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:47:44.283457 | orchestrator | 2025-09-08 00:47:44 | INFO  | Task c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:47:44.285134 | orchestrator | 2025-09-08 00:47:44 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:47:44.290089 | orchestrator | 2025-09-08 00:47:44 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:47:44.293141 | orchestrator | 2025-09-08 00:47:44 | INFO  | Task 18cff402-91d6-4d5c-9628-fcad9e1be8f3 is in state SUCCESS 2025-09-08 00:47:44.293166 | orchestrator | 2025-09-08 00:47:44 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:47:44.295546 | orchestrator | 2025-09-08 00:47:44.295589 | orchestrator | 2025-09-08 00:47:44.295645 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-09-08 00:47:44.295656 | orchestrator | 2025-09-08 00:47:44.295668 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-08 00:47:44.295679 | orchestrator | Monday 08 September 2025 00:47:28 +0000 (0:00:00.188) 0:00:00.188 ****** 2025-09-08 00:47:44.295691 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-08 00:47:44.295702 | orchestrator | 2025-09-08 00:47:44.295713 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-08 00:47:44.295724 | orchestrator | Monday 08 September 2025 00:47:29 +0000 (0:00:00.897) 0:00:01.085 ****** 2025-09-08 00:47:44.295754 | orchestrator | changed: [testbed-manager] 2025-09-08 00:47:44.295765 | orchestrator | 2025-09-08 00:47:44.295776 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-09-08 00:47:44.295789 | orchestrator | Monday 
08 September 2025 00:47:31 +0000 (0:00:01.360) 0:00:02.446 ****** 2025-09-08 00:47:44.295800 | orchestrator | changed: [testbed-manager] 2025-09-08 00:47:44.295810 | orchestrator | 2025-09-08 00:47:44.295821 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:47:44.295832 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:47:44.295845 | orchestrator | 2025-09-08 00:47:44.295856 | orchestrator | 2025-09-08 00:47:44.295867 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:47:44.295880 | orchestrator | Monday 08 September 2025 00:47:31 +0000 (0:00:00.605) 0:00:03.051 ****** 2025-09-08 00:47:44.295898 | orchestrator | =============================================================================== 2025-09-08 00:47:44.295916 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.36s 2025-09-08 00:47:44.295934 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.90s 2025-09-08 00:47:44.295951 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.61s 2025-09-08 00:47:44.295970 | orchestrator | 2025-09-08 00:47:44.295988 | orchestrator | 2025-09-08 00:47:44.296006 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-08 00:47:44.296017 | orchestrator | 2025-09-08 00:47:44.296028 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-08 00:47:44.296039 | orchestrator | Monday 08 September 2025 00:47:28 +0000 (0:00:00.158) 0:00:00.158 ****** 2025-09-08 00:47:44.296049 | orchestrator | ok: [testbed-manager] 2025-09-08 00:47:44.296061 | orchestrator | 2025-09-08 00:47:44.296072 | orchestrator | TASK [Create .kube directory] ************************************************** 
2025-09-08 00:47:44.296083 | orchestrator | Monday 08 September 2025 00:47:29 +0000 (0:00:00.640) 0:00:00.799 ****** 2025-09-08 00:47:44.296093 | orchestrator | ok: [testbed-manager] 2025-09-08 00:47:44.296104 | orchestrator | 2025-09-08 00:47:44.296115 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-08 00:47:44.296127 | orchestrator | Monday 08 September 2025 00:47:29 +0000 (0:00:00.558) 0:00:01.357 ****** 2025-09-08 00:47:44.296140 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-08 00:47:44.296153 | orchestrator | 2025-09-08 00:47:44.296165 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-08 00:47:44.296178 | orchestrator | Monday 08 September 2025 00:47:30 +0000 (0:00:00.688) 0:00:02.046 ****** 2025-09-08 00:47:44.296190 | orchestrator | changed: [testbed-manager] 2025-09-08 00:47:44.296204 | orchestrator | 2025-09-08 00:47:44.296216 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-08 00:47:44.296228 | orchestrator | Monday 08 September 2025 00:47:31 +0000 (0:00:01.369) 0:00:03.415 ****** 2025-09-08 00:47:44.296240 | orchestrator | changed: [testbed-manager] 2025-09-08 00:47:44.296252 | orchestrator | 2025-09-08 00:47:44.296265 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-08 00:47:44.296277 | orchestrator | Monday 08 September 2025 00:47:32 +0000 (0:00:00.737) 0:00:04.153 ****** 2025-09-08 00:47:44.296289 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-08 00:47:44.296301 | orchestrator | 2025-09-08 00:47:44.296314 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-08 00:47:44.296327 | orchestrator | Monday 08 September 2025 00:47:34 +0000 (0:00:01.521) 0:00:05.674 ****** 2025-09-08 00:47:44.296339 | orchestrator | changed: 
[testbed-manager -> localhost] 2025-09-08 00:47:44.296352 | orchestrator | 2025-09-08 00:47:44.296365 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-08 00:47:44.296383 | orchestrator | Monday 08 September 2025 00:47:34 +0000 (0:00:00.743) 0:00:06.418 ****** 2025-09-08 00:47:44.296399 | orchestrator | ok: [testbed-manager] 2025-09-08 00:47:44.296410 | orchestrator | 2025-09-08 00:47:44.296421 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-08 00:47:44.296432 | orchestrator | Monday 08 September 2025 00:47:35 +0000 (0:00:00.399) 0:00:06.817 ****** 2025-09-08 00:47:44.296442 | orchestrator | ok: [testbed-manager] 2025-09-08 00:47:44.296453 | orchestrator | 2025-09-08 00:47:44.296464 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:47:44.296475 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:47:44.296486 | orchestrator | 2025-09-08 00:47:44.296496 | orchestrator | 2025-09-08 00:47:44.296507 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:47:44.296518 | orchestrator | Monday 08 September 2025 00:47:35 +0000 (0:00:00.254) 0:00:07.072 ****** 2025-09-08 00:47:44.296528 | orchestrator | =============================================================================== 2025-09-08 00:47:44.296539 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.52s 2025-09-08 00:47:44.296550 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.37s 2025-09-08 00:47:44.296561 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.74s 2025-09-08 00:47:44.296585 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.74s 2025-09-08 
00:47:44.296623 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.69s 2025-09-08 00:47:44.296634 | orchestrator | Get home directory of operator user ------------------------------------- 0.64s 2025-09-08 00:47:44.296645 | orchestrator | Create .kube directory -------------------------------------------------- 0.56s 2025-09-08 00:47:44.296656 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.40s 2025-09-08 00:47:44.296666 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.26s 2025-09-08 00:47:44.296677 | orchestrator | 2025-09-08 00:47:44.296688 | orchestrator | 2025-09-08 00:47:44.296699 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 00:47:44.296709 | orchestrator | 2025-09-08 00:47:44.296720 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 00:47:44.296731 | orchestrator | Monday 08 September 2025 00:46:29 +0000 (0:00:00.366) 0:00:00.366 ****** 2025-09-08 00:47:44.296742 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:47:44.296753 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:47:44.296763 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:47:44.296774 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:44.296785 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:44.296796 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:44.296806 | orchestrator | 2025-09-08 00:47:44.296817 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 00:47:44.296828 | orchestrator | Monday 08 September 2025 00:46:30 +0000 (0:00:00.826) 0:00:01.193 ****** 2025-09-08 00:47:44.296839 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-08 00:47:44.296850 | orchestrator | ok: [testbed-node-4] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-08 00:47:44.296861 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-08 00:47:44.296871 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-08 00:47:44.296882 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-08 00:47:44.296893 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-08 00:47:44.296904 | orchestrator | 2025-09-08 00:47:44.296914 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-09-08 00:47:44.296925 | orchestrator | 2025-09-08 00:47:44.296936 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-09-08 00:47:44.296958 | orchestrator | Monday 08 September 2025 00:46:31 +0000 (0:00:00.972) 0:00:02.165 ****** 2025-09-08 00:47:44.296971 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:47:44.296983 | orchestrator | 2025-09-08 00:47:44.296994 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-08 00:47:44.297004 | orchestrator | Monday 08 September 2025 00:46:33 +0000 (0:00:02.164) 0:00:04.329 ****** 2025-09-08 00:47:44.297015 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-08 00:47:44.297026 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-08 00:47:44.297037 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-08 00:47:44.297048 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-08 00:47:44.297059 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-08 00:47:44.297069 | orchestrator | changed: 
[testbed-node-2] => (item=openvswitch) 2025-09-08 00:47:44.297080 | orchestrator | 2025-09-08 00:47:44.297091 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-08 00:47:44.297101 | orchestrator | Monday 08 September 2025 00:46:35 +0000 (0:00:02.026) 0:00:06.356 ****** 2025-09-08 00:47:44.297112 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-08 00:47:44.297123 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-08 00:47:44.297133 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-08 00:47:44.297144 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-08 00:47:44.297155 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-08 00:47:44.297166 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-08 00:47:44.297176 | orchestrator | 2025-09-08 00:47:44.297187 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-08 00:47:44.297203 | orchestrator | Monday 08 September 2025 00:46:37 +0000 (0:00:02.102) 0:00:08.459 ****** 2025-09-08 00:47:44.297214 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-09-08 00:47:44.297224 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:47:44.297235 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-09-08 00:47:44.297245 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:47:44.297256 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-09-08 00:47:44.297267 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:47:44.297278 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-09-08 00:47:44.297288 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:44.297299 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-09-08 00:47:44.297310 | orchestrator | skipping: [testbed-node-1] 2025-09-08 
00:47:44.297320 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-09-08 00:47:44.297331 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:44.297342 | orchestrator | 2025-09-08 00:47:44.297353 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-09-08 00:47:44.297363 | orchestrator | Monday 08 September 2025 00:46:39 +0000 (0:00:02.228) 0:00:10.687 ****** 2025-09-08 00:47:44.297374 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:47:44.297385 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:47:44.297396 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:47:44.297412 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:44.297424 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:47:44.297434 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:44.297445 | orchestrator | 2025-09-08 00:47:44.297456 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-09-08 00:47:44.297467 | orchestrator | Monday 08 September 2025 00:46:40 +0000 (0:00:01.280) 0:00:11.968 ****** 2025-09-08 00:47:44.297480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-08 00:47:44.297504 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-08 00:47:44.297517 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-08 00:47:44.297528 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-08 00:47:44.297540 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-08 00:47:44.297567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-08 00:47:44.297586 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298521 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298626 | orchestrator | 2025-09-08 00:47:44.298635 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-08 00:47:44.298644 | orchestrator | Monday 08 September 2025 00:46:44 +0000 (0:00:03.274) 0:00:15.242 ****** 2025-09-08 00:47:44.298655 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298662 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298669 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298675 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298710 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298723 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-08 00:47:44.298758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-08 00:47:44.298765 | orchestrator | 
2025-09-08 00:47:44.298772 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-09-08 00:47:44.298778 | orchestrator | Monday 08 September 2025 00:46:48 +0000 (0:00:04.328) 0:00:19.570 ******
2025-09-08 00:47:44.298785 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:47:44.298792 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:47:44.298799 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:47:44.298805 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:44.298811 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:47:44.298817 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:47:44.298823 | orchestrator | 2025-09-08 00:47:44.298829 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-09-08 00:47:44.298835 | orchestrator | Monday 08 September 2025 00:46:50 +0000 (0:00:01.696) 0:00:21.267 ****** 2025-09-08 00:47:44.298841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298848 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298854 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298870 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298900 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298923 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298934 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-08 00:47:44.298947 | orchestrator | 2025-09-08 00:47:44.298954 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-08 00:47:44.298960 | orchestrator | Monday 08 September 2025 00:46:52 +0000 (0:00:02.044) 0:00:23.311 ****** 2025-09-08 00:47:44.298966 | orchestrator | 2025-09-08 00:47:44.298972 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-08 00:47:44.298979 | orchestrator | Monday 08 September 2025 00:46:52 +0000 (0:00:00.287) 0:00:23.598 ****** 2025-09-08 00:47:44.298985 | orchestrator | 2025-09-08 00:47:44.298991 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-08 00:47:44.298997 | orchestrator | Monday 08 September 
2025 00:46:52 +0000 (0:00:00.190) 0:00:23.788 ******
2025-09-08 00:47:44.299003 | orchestrator | 
2025-09-08 00:47:44.299009 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-08 00:47:44.299016 | orchestrator | Monday 08 September 2025 00:46:52 +0000 (0:00:00.164) 0:00:23.953 ******
2025-09-08 00:47:44.299022 | orchestrator | 
2025-09-08 00:47:44.299028 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-08 00:47:44.299034 | orchestrator | Monday 08 September 2025 00:46:53 +0000 (0:00:00.134) 0:00:24.087 ******
2025-09-08 00:47:44.299040 | orchestrator | 
2025-09-08 00:47:44.299050 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-08 00:47:44.299056 | orchestrator | Monday 08 September 2025 00:46:53 +0000 (0:00:00.130) 0:00:24.218 ******
2025-09-08 00:47:44.299062 | orchestrator | 
2025-09-08 00:47:44.299068 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-09-08 00:47:44.299074 | orchestrator | Monday 08 September 2025 00:46:53 +0000 (0:00:00.145) 0:00:24.364 ******
2025-09-08 00:47:44.299081 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:47:44.299087 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:47:44.299093 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:47:44.299099 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:47:44.299106 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:47:44.299112 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:47:44.299118 | orchestrator | 
2025-09-08 00:47:44.299125 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-09-08 00:47:44.299132 | orchestrator | Monday 08 September 2025 00:47:03 +0000 (0:00:10.113) 0:00:34.477 ******
2025-09-08 00:47:44.299138 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:47:44.299145 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:47:44.299151 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:47:44.299157 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:47:44.299163 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:47:44.299170 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:47:44.299176 | orchestrator | 
2025-09-08 00:47:44.299182 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-09-08 00:47:44.299188 | orchestrator | Monday 08 September 2025 00:47:05 +0000 (0:00:01.860) 0:00:36.338 ******
2025-09-08 00:47:44.299194 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:47:44.299200 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:47:44.299206 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:47:44.299213 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:47:44.299219 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:47:44.299225 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:47:44.299231 | orchestrator | 
2025-09-08 00:47:44.299237 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-09-08 00:47:44.299243 | orchestrator | Monday 08 September 2025 00:47:16 +0000 (0:00:11.366) 0:00:47.704 ******
2025-09-08 00:47:44.299250 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-09-08 00:47:44.299260 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-09-08 00:47:44.299267 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-09-08 00:47:44.299273 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-09-08 00:47:44.299279 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-09-08 00:47:44.299285 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-09-08 00:47:44.299291 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-09-08 00:47:44.299298 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-09-08 00:47:44.299308 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-09-08 00:47:44.299314 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-09-08 00:47:44.299320 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-09-08 00:47:44.299326 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-09-08 00:47:44.299337 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-08 00:47:44.299343 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-08 00:47:44.299349 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-08 00:47:44.299356 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-08 00:47:44.299362 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-08 00:47:44.299368 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-08 00:47:44.299374 | orchestrator | 
2025-09-08 00:47:44.299380 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-09-08 00:47:44.299386 | orchestrator | Monday 08 September 2025 00:47:25 +0000 (0:00:08.738) 0:00:56.443 ******
2025-09-08 00:47:44.299393 | orchestrator | skipping: [testbed-node-3] => (item=br-ex) 
2025-09-08 00:47:44.299400 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:47:44.299406 | orchestrator | skipping: [testbed-node-4] => (item=br-ex) 
2025-09-08 00:47:44.299412 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:47:44.299418 | orchestrator | skipping: [testbed-node-5] => (item=br-ex) 
2025-09-08 00:47:44.299424 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:47:44.299431 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-09-08 00:47:44.299438 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-09-08 00:47:44.299444 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-09-08 00:47:44.299450 | orchestrator | 
2025-09-08 00:47:44.299456 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-09-08 00:47:44.299462 | orchestrator | Monday 08 September 2025 00:47:28 +0000 (0:00:03.545) 0:00:59.988 ******
2025-09-08 00:47:44.299469 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0']) 
2025-09-08 00:47:44.299475 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:47:44.299481 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0']) 
2025-09-08 00:47:44.299487 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:47:44.299493 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0']) 
2025-09-08 00:47:44.299500 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:47:44.299506 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-09-08
00:47:44.299512 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-09-08 00:47:44.299518 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-09-08 00:47:44.299525 | orchestrator | 
2025-09-08 00:47:44.299531 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-09-08 00:47:44.299537 | orchestrator | Monday 08 September 2025 00:47:32 +0000 (0:00:03.720) 0:01:03.709 ******
2025-09-08 00:47:44.299543 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:47:44.299549 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:47:44.299555 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:47:44.299562 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:47:44.299568 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:47:44.299574 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:47:44.299580 | orchestrator | 
2025-09-08 00:47:44.299586 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:47:44.299610 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-08 00:47:44.299618 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-08 00:47:44.299635 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-08 00:47:44.299642 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-08 00:47:44.299648 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-08 00:47:44.299654 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-08 00:47:44.299660 | orchestrator | 
2025-09-08 00:47:44.299667 | orchestrator | 
2025-09-08 00:47:44.299673 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:47:44.299679 | orchestrator | Monday 08 September 2025 00:47:40 +0000 (0:00:08.262) 0:01:11.972 ******
2025-09-08 00:47:44.299689 | orchestrator | ===============================================================================
2025-09-08 00:47:44.299695 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.63s
2025-09-08 00:47:44.299701 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.11s
2025-09-08 00:47:44.299707 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.74s
2025-09-08 00:47:44.299714 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.32s
2025-09-08 00:47:44.299720 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.72s
2025-09-08 00:47:44.299726 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.55s
2025-09-08 00:47:44.299732 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.28s
2025-09-08 00:47:44.299738 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.23s
2025-09-08 00:47:44.299744 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.16s
2025-09-08 00:47:44.299750 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.10s
2025-09-08 00:47:44.299756 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.04s
2025-09-08 00:47:44.299762 | orchestrator | module-load : Load modules ---------------------------------------------- 2.03s
2025-09-08 00:47:44.299769 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.86s
2025-09-08 00:47:44.299775 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.70s
2025-09-08 00:47:44.299781 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.28s
2025-09-08 00:47:44.299787 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.05s
2025-09-08 00:47:44.299793 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.97s
2025-09-08 00:47:44.299799 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.83s
2025-09-08 00:47:47.333402 | orchestrator | 2025-09-08 00:47:47 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:47:47.333668 | orchestrator | 2025-09-08 00:47:47 | INFO  | Task c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED
2025-09-08 00:47:47.334399 | orchestrator | 2025-09-08 00:47:47 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:47:47.335183 | orchestrator | 2025-09-08 00:47:47 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED
2025-09-08 00:47:47.335238 | orchestrator | 2025-09-08 00:47:47 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:50.377242 | orchestrator | 2025-09-08 00:47:50 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:47:50.381802 | orchestrator | 2025-09-08 00:47:50 | INFO  | Task c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED
2025-09-08 00:47:50.382117 | orchestrator | 2025-09-08 00:47:50 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:47:50.382423 | orchestrator | 2025-09-08 00:47:50 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED
2025-09-08 00:47:50.382445 | orchestrator | 2025-09-08 00:47:50 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:53.424895 | orchestrator | 2025-09-08 00:47:53 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state
STARTED
2025-09-08 00:48:20.946735 | orchestrator | 2025-09-08 00:48:20 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:48:20.946847 | orchestrator | 2025-09-08 00:48:20 | INFO  | 
Task c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:48:20.947686 | orchestrator | 2025-09-08 00:48:20 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:48:20.950738 | orchestrator | 2025-09-08 00:48:20 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:48:20.950764 | orchestrator | 2025-09-08 00:48:20 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:23.982578 | orchestrator | 2025-09-08 00:48:23 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:48:23.982848 | orchestrator | 2025-09-08 00:48:23 | INFO  | Task c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:48:23.984291 | orchestrator | 2025-09-08 00:48:23 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:48:23.985168 | orchestrator | 2025-09-08 00:48:23 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:48:23.985216 | orchestrator | 2025-09-08 00:48:23 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:27.030294 | orchestrator | 2025-09-08 00:48:27 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:48:27.032018 | orchestrator | 2025-09-08 00:48:27 | INFO  | Task c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:48:27.032053 | orchestrator | 2025-09-08 00:48:27 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:48:27.032782 | orchestrator | 2025-09-08 00:48:27 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:48:27.032902 | orchestrator | 2025-09-08 00:48:27 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:30.082695 | orchestrator | 2025-09-08 00:48:30 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:48:30.084628 | orchestrator | 2025-09-08 00:48:30 | INFO  | Task 
c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:48:30.085431 | orchestrator | 2025-09-08 00:48:30 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:48:30.087033 | orchestrator | 2025-09-08 00:48:30 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:48:30.087132 | orchestrator | 2025-09-08 00:48:30 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:33.132490 | orchestrator | 2025-09-08 00:48:33 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:48:33.135645 | orchestrator | 2025-09-08 00:48:33 | INFO  | Task c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:48:33.137226 | orchestrator | 2025-09-08 00:48:33 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:48:33.139677 | orchestrator | 2025-09-08 00:48:33 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:48:33.140436 | orchestrator | 2025-09-08 00:48:33 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:36.183944 | orchestrator | 2025-09-08 00:48:36 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:48:36.185975 | orchestrator | 2025-09-08 00:48:36 | INFO  | Task c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:48:36.188029 | orchestrator | 2025-09-08 00:48:36 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:48:36.189469 | orchestrator | 2025-09-08 00:48:36 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:48:36.189494 | orchestrator | 2025-09-08 00:48:36 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:39.226310 | orchestrator | 2025-09-08 00:48:39 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:48:39.227089 | orchestrator | 2025-09-08 00:48:39 | INFO  | Task 
c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:48:39.229523 | orchestrator | 2025-09-08 00:48:39 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:48:39.231330 | orchestrator | 2025-09-08 00:48:39 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:48:39.231572 | orchestrator | 2025-09-08 00:48:39 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:42.272689 | orchestrator | 2025-09-08 00:48:42 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:48:42.273479 | orchestrator | 2025-09-08 00:48:42 | INFO  | Task c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:48:42.274496 | orchestrator | 2025-09-08 00:48:42 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:48:42.275632 | orchestrator | 2025-09-08 00:48:42 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:48:42.275944 | orchestrator | 2025-09-08 00:48:42 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:45.319588 | orchestrator | 2025-09-08 00:48:45 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:48:45.319737 | orchestrator | 2025-09-08 00:48:45 | INFO  | Task c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:48:45.321495 | orchestrator | 2025-09-08 00:48:45 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:48:45.322179 | orchestrator | 2025-09-08 00:48:45 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:48:45.322288 | orchestrator | 2025-09-08 00:48:45 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:48.385588 | orchestrator | 2025-09-08 00:48:48 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:48:48.386974 | orchestrator | 2025-09-08 00:48:48 | INFO  | Task 
c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:48:48.388516 | orchestrator | 2025-09-08 00:48:48 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:48:48.390392 | orchestrator | 2025-09-08 00:48:48 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:48:48.390418 | orchestrator | 2025-09-08 00:48:48 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:51.444453 | orchestrator | 2025-09-08 00:48:51 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:48:51.445667 | orchestrator | 2025-09-08 00:48:51 | INFO  | Task c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:48:51.447312 | orchestrator | 2025-09-08 00:48:51 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:48:51.448999 | orchestrator | 2025-09-08 00:48:51 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:48:51.449022 | orchestrator | 2025-09-08 00:48:51 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:54.496831 | orchestrator | 2025-09-08 00:48:54 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:48:54.498467 | orchestrator | 2025-09-08 00:48:54 | INFO  | Task c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:48:54.500303 | orchestrator | 2025-09-08 00:48:54 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:48:54.502403 | orchestrator | 2025-09-08 00:48:54 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:48:54.502427 | orchestrator | 2025-09-08 00:48:54 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:57.532937 | orchestrator | 2025-09-08 00:48:57 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:48:57.533362 | orchestrator | 2025-09-08 00:48:57 | INFO  | Task 
c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:48:57.535162 | orchestrator | 2025-09-08 00:48:57 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:48:57.535210 | orchestrator | 2025-09-08 00:48:57 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:48:57.535224 | orchestrator | 2025-09-08 00:48:57 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:49:00.572429 | orchestrator | 2025-09-08 00:49:00 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:49:00.573226 | orchestrator | 2025-09-08 00:49:00 | INFO  | Task c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:49:00.576529 | orchestrator | 2025-09-08 00:49:00 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:49:00.577538 | orchestrator | 2025-09-08 00:49:00 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:49:00.577697 | orchestrator | 2025-09-08 00:49:00 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:49:03.621466 | orchestrator | 2025-09-08 00:49:03 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:49:03.622200 | orchestrator | 2025-09-08 00:49:03 | INFO  | Task c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:49:03.625471 | orchestrator | 2025-09-08 00:49:03 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:49:03.628345 | orchestrator | 2025-09-08 00:49:03 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:49:03.628712 | orchestrator | 2025-09-08 00:49:03 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:49:06.664425 | orchestrator | 2025-09-08 00:49:06 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:49:06.665151 | orchestrator | 2025-09-08 00:49:06 | INFO  | Task 
c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:49:06.666833 | orchestrator | 2025-09-08 00:49:06 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:49:06.670665 | orchestrator | 2025-09-08 00:49:06 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:49:06.671301 | orchestrator | 2025-09-08 00:49:06 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:49:09.709157 | orchestrator | 2025-09-08 00:49:09 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:49:09.712548 | orchestrator | 2025-09-08 00:49:09 | INFO  | Task c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:49:09.714877 | orchestrator | 2025-09-08 00:49:09 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:49:09.718725 | orchestrator | 2025-09-08 00:49:09 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:49:09.719125 | orchestrator | 2025-09-08 00:49:09 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:49:12.760874 | orchestrator | 2025-09-08 00:49:12 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:49:12.762871 | orchestrator | 2025-09-08 00:49:12 | INFO  | Task c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:49:12.763887 | orchestrator | 2025-09-08 00:49:12 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:49:12.765141 | orchestrator | 2025-09-08 00:49:12 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:49:12.765164 | orchestrator | 2025-09-08 00:49:12 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:49:15.808025 | orchestrator | 2025-09-08 00:49:15 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:49:15.808813 | orchestrator | 2025-09-08 00:49:15 | INFO  | Task 
c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:49:15.810291 | orchestrator | 2025-09-08 00:49:15 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:49:15.811394 | orchestrator | 2025-09-08 00:49:15 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state STARTED 2025-09-08 00:49:15.811452 | orchestrator | 2025-09-08 00:49:15 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:49:18.853867 | orchestrator | 2025-09-08 00:49:18 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:49:18.855775 | orchestrator | 2025-09-08 00:49:18 | INFO  | Task c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:49:18.857326 | orchestrator | 2025-09-08 00:49:18 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:49:18.859808 | orchestrator | 2025-09-08 00:49:18 | INFO  | Task be596135-2f2c-43d2-b56a-6cd6ed12fc9f is in state SUCCESS 2025-09-08 00:49:18.859833 | orchestrator | 2025-09-08 00:49:18 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:49:18.860856 | orchestrator | 2025-09-08 00:49:18.860886 | orchestrator | 2025-09-08 00:49:18.860898 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-09-08 00:49:18.860909 | orchestrator | 2025-09-08 00:49:18.860920 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-08 00:49:18.860932 | orchestrator | Monday 08 September 2025 00:46:51 +0000 (0:00:00.283) 0:00:00.283 ****** 2025-09-08 00:49:18.860944 | orchestrator | ok: [localhost] => { 2025-09-08 00:49:18.860958 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2025-09-08 00:49:18.860970 | orchestrator | } 2025-09-08 00:49:18.860981 | orchestrator | 2025-09-08 00:49:18.860992 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-09-08 00:49:18.861003 | orchestrator | Monday 08 September 2025 00:46:51 +0000 (0:00:00.046) 0:00:00.329 ****** 2025-09-08 00:49:18.861015 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-09-08 00:49:18.861028 | orchestrator | ...ignoring 2025-09-08 00:49:18.861040 | orchestrator | 2025-09-08 00:49:18.861051 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-09-08 00:49:18.861062 | orchestrator | Monday 08 September 2025 00:46:53 +0000 (0:00:02.821) 0:00:03.151 ****** 2025-09-08 00:49:18.861073 | orchestrator | skipping: [localhost] 2025-09-08 00:49:18.861084 | orchestrator | 2025-09-08 00:49:18.861094 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-09-08 00:49:18.861106 | orchestrator | Monday 08 September 2025 00:46:54 +0000 (0:00:00.182) 0:00:03.333 ****** 2025-09-08 00:49:18.861116 | orchestrator | ok: [localhost] 2025-09-08 00:49:18.861127 | orchestrator | 2025-09-08 00:49:18.861138 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 00:49:18.861149 | orchestrator | 2025-09-08 00:49:18.861160 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 00:49:18.861171 | orchestrator | Monday 08 September 2025 00:46:54 +0000 (0:00:00.428) 0:00:03.762 ****** 2025-09-08 00:49:18.861183 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:49:18.861194 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:49:18.861205 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:49:18.861215 | orchestrator | 2025-09-08 
00:49:18.861226 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 00:49:18.861237 | orchestrator | Monday 08 September 2025 00:46:55 +0000 (0:00:00.818) 0:00:04.580 ****** 2025-09-08 00:49:18.861248 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-09-08 00:49:18.861259 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-09-08 00:49:18.861270 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-09-08 00:49:18.861280 | orchestrator | 2025-09-08 00:49:18.861291 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-09-08 00:49:18.861302 | orchestrator | 2025-09-08 00:49:18.861313 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-08 00:49:18.861356 | orchestrator | Monday 08 September 2025 00:46:56 +0000 (0:00:01.228) 0:00:05.809 ****** 2025-09-08 00:49:18.861368 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:49:18.861379 | orchestrator | 2025-09-08 00:49:18.861390 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-08 00:49:18.861400 | orchestrator | Monday 08 September 2025 00:46:57 +0000 (0:00:00.594) 0:00:06.403 ****** 2025-09-08 00:49:18.861411 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:49:18.861422 | orchestrator | 2025-09-08 00:49:18.861432 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-09-08 00:49:18.861443 | orchestrator | Monday 08 September 2025 00:46:58 +0000 (0:00:00.910) 0:00:07.314 ****** 2025-09-08 00:49:18.861456 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:18.861469 | orchestrator | 2025-09-08 00:49:18.861482 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 
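The version-gate tasks in this role ("at most one version behind", "catch downgrade") skip here because no RabbitMQ container exists yet. The rule they imply can be sketched as follows (an illustration only; kolla-ansible implements these checks as Ansible assertions, and the version strings are examples):

```python
def check_upgrade_path(running: str, new: str) -> None:
    """Refuse downgrades and upgrades that skip more than one minor version.

    `running` and `new` are dotted version strings such as "3.13.7".
    """
    run = tuple(int(p) for p in running.split("."))
    nxt = tuple(int(p) for p in new.split("."))
    if nxt < run:
        raise ValueError(f"downgrading RabbitMQ {running} -> {new} is not supported")
    if nxt[0] != run[0] or nxt[1] - run[1] > 1:
        raise ValueError(f"upgrade {running} -> {new} skips more than one version")
```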
2025-09-08 00:49:18.861494 | orchestrator | Monday 08 September 2025 00:46:58 +0000 (0:00:00.384) 0:00:07.699 ****** 2025-09-08 00:49:18.861507 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:18.861520 | orchestrator | 2025-09-08 00:49:18.861533 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-09-08 00:49:18.861546 | orchestrator | Monday 08 September 2025 00:46:58 +0000 (0:00:00.416) 0:00:08.116 ****** 2025-09-08 00:49:18.861558 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:18.861571 | orchestrator | 2025-09-08 00:49:18.861583 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-09-08 00:49:18.861618 | orchestrator | Monday 08 September 2025 00:46:59 +0000 (0:00:00.335) 0:00:08.452 ****** 2025-09-08 00:49:18.861631 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:18.861645 | orchestrator | 2025-09-08 00:49:18.861658 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-08 00:49:18.861670 | orchestrator | Monday 08 September 2025 00:46:59 +0000 (0:00:00.329) 0:00:08.781 ****** 2025-09-08 00:49:18.861682 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:49:18.861695 | orchestrator | 2025-09-08 00:49:18.861707 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-08 00:49:18.861720 | orchestrator | Monday 08 September 2025 00:47:00 +0000 (0:00:00.789) 0:00:09.571 ****** 2025-09-08 00:49:18.861732 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:49:18.861744 | orchestrator | 2025-09-08 00:49:18.861757 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-09-08 00:49:18.861770 | orchestrator | Monday 08 September 2025 00:47:01 +0000 (0:00:00.859) 0:00:10.431 ****** 2025-09-08 
00:49:18.861782 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:18.861795 | orchestrator | 2025-09-08 00:49:18.861807 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-09-08 00:49:18.861817 | orchestrator | Monday 08 September 2025 00:47:01 +0000 (0:00:00.441) 0:00:10.872 ****** 2025-09-08 00:49:18.861828 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:18.861839 | orchestrator | 2025-09-08 00:49:18.861859 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-09-08 00:49:18.861870 | orchestrator | Monday 08 September 2025 00:47:02 +0000 (0:00:00.961) 0:00:11.834 ****** 2025-09-08 00:49:18.861985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-08 00:49:18.862109 | orchestrator | changed: [testbed-node-0] => (item=<same 'rabbitmq' definition as above>) 2025-09-08 00:49:18.862129 | orchestrator | changed: [testbed-node-1] => (item=<same 'rabbitmq' definition as above>) 2025-09-08
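The `healthcheck` block in the service definition above expresses durations in seconds as strings (`'interval': '30'`, `'timeout': '30'`, …), while container engines expect nanoseconds. A hedged sketch of such a translation (the function name and mapping are assumptions, not kolla-ansible code):

```python
def to_docker_healthcheck(hc: dict) -> dict:
    """Translate a kolla-style healthcheck dict (seconds as strings)
    into docker-py Healthcheck keyword arguments (nanoseconds)."""
    def ns(seconds: str) -> int:
        return int(seconds) * 1_000_000_000

    return {
        "test": hc["test"],                    # e.g. ['CMD-SHELL', 'healthcheck_rabbitmq']
        "interval": ns(hc["interval"]),
        "timeout": ns(hc["timeout"]),
        "start_period": ns(hc["start_period"]),
        "retries": int(hc["retries"]),
    }
```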
00:49:18.862141 | orchestrator | 2025-09-08 00:49:18.862152 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-09-08 00:49:18.862163 | orchestrator | Monday 08 September 2025 00:47:04 +0000 (0:00:01.394) 0:00:13.229 ****** 2025-09-08 00:49:18.862189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-08 00:49:18.862207 | orchestrator | changed: [testbed-node-1] => (item=<same 'rabbitmq' definition as above>) 2025-09-08 00:49:18.862230 | orchestrator | changed: [testbed-node-2] => (item=<same 'rabbitmq' definition as above>) 2025-09-08 00:49:18.862242 | orchestrator | 2025-09-08 00:49:18.862252 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-09-08 00:49:18.862263 | orchestrator | Monday 08 September 2025 00:47:07 +0000 (0:00:03.894) 0:00:17.124 ****** 2025-09-08 00:49:18.862274 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-08 00:49:18.862285 |
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-08 00:49:18.862296 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-08 00:49:18.862307 | orchestrator | 2025-09-08 00:49:18.862318 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-09-08 00:49:18.862329 | orchestrator | Monday 08 September 2025 00:47:11 +0000 (0:00:03.308) 0:00:20.432 ****** 2025-09-08 00:49:18.862339 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-08 00:49:18.862350 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-08 00:49:18.862361 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-08 00:49:18.862371 | orchestrator | 2025-09-08 00:49:18.862382 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-08 00:49:18.862393 | orchestrator | Monday 08 September 2025 00:47:13 +0000 (0:00:02.652) 0:00:23.085 ****** 2025-09-08 00:49:18.862404 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-08 00:49:18.862414 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-08 00:49:18.862425 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-08 00:49:18.862436 | orchestrator | 2025-09-08 00:49:18.862446 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-09-08 00:49:18.862457 | orchestrator | Monday 08 September 2025 00:47:16 +0000 (0:00:02.285) 0:00:25.370 ****** 2025-09-08 00:49:18.862482 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-08 00:49:18.862493 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-08 00:49:18.862504 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-08 00:49:18.862515 | orchestrator | 2025-09-08 00:49:18.862526 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-09-08 00:49:18.862536 | orchestrator | Monday 08 September 2025 00:47:20 +0000 (0:00:03.831) 0:00:29.201 ****** 2025-09-08 00:49:18.862547 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-08 00:49:18.862558 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-08 00:49:18.862568 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-08 00:49:18.862579 | orchestrator | 2025-09-08 00:49:18.862610 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-08 00:49:18.862622 | orchestrator | Monday 08 September 2025 00:47:21 +0000 (0:00:01.980) 0:00:31.182 ****** 2025-09-08 00:49:18.862632 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-08 00:49:18.862643 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-08 00:49:18.862660 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-08 00:49:18.862671 | orchestrator | 2025-09-08 00:49:18.862682 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-08 00:49:18.862692 | orchestrator | Monday 08 September 2025 00:47:23 +0000 (0:00:01.585) 0:00:32.768 ****** 2025-09-08 
00:49:18.862703 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:18.862714 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:49:18.862725 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:49:18.862735 | orchestrator | 2025-09-08 00:49:18.862746 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-08 00:49:18.862757 | orchestrator | Monday 08 September 2025 00:47:23 +0000 (0:00:00.422) 0:00:33.190 ****** 2025-09-08 00:49:18.862769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-08 00:49:18.862781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-08 00:49:18.862812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-08 00:49:18.862824 | orchestrator | 2025-09-08 00:49:18.862835 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-09-08 00:49:18.862845 | orchestrator | Monday 08 September 2025 
00:47:26 +0000 (0:00:02.608) 0:00:35.799 ****** 2025-09-08 00:49:18.862856 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:49:18.862867 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:49:18.862878 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:49:18.862888 | orchestrator | 2025-09-08 00:49:18.862899 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-08 00:49:18.862915 | orchestrator | Monday 08 September 2025 00:47:27 +0000 (0:00:00.970) 0:00:36.770 ****** 2025-09-08 00:49:18.862926 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:49:18.862937 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:49:18.862948 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:49:18.862959 | orchestrator | 2025-09-08 00:49:18.862969 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-09-08 00:49:18.862980 | orchestrator | Monday 08 September 2025 00:47:35 +0000 (0:00:07.809) 0:00:44.579 ****** 2025-09-08 00:49:18.862991 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:49:18.863001 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:49:18.863012 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:49:18.863023 | orchestrator | 2025-09-08 00:49:18.863034 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-08 00:49:18.863044 | orchestrator | 2025-09-08 00:49:18.863055 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-08 00:49:18.863066 | orchestrator | Monday 08 September 2025 00:47:35 +0000 (0:00:00.537) 0:00:45.117 ****** 2025-09-08 00:49:18.863076 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:49:18.863087 | orchestrator | 2025-09-08 00:49:18.863098 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-08 00:49:18.863109 | orchestrator | Monday 08 
September 2025 00:47:36 +0000 (0:00:00.704) 0:00:45.822 ****** 2025-09-08 00:49:18.863120 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:18.863131 | orchestrator | 2025-09-08 00:49:18.863141 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-08 00:49:18.863152 | orchestrator | Monday 08 September 2025 00:47:36 +0000 (0:00:00.253) 0:00:46.075 ****** 2025-09-08 00:49:18.863163 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:49:18.863173 | orchestrator | 2025-09-08 00:49:18.863184 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-08 00:49:18.863201 | orchestrator | Monday 08 September 2025 00:47:43 +0000 (0:00:06.726) 0:00:52.802 ****** 2025-09-08 00:49:18.863212 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:49:18.863223 | orchestrator | 2025-09-08 00:49:18.863233 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-08 00:49:18.863244 | orchestrator | 2025-09-08 00:49:18.863255 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-08 00:49:18.863265 | orchestrator | Monday 08 September 2025 00:48:35 +0000 (0:00:51.917) 0:01:44.719 ****** 2025-09-08 00:49:18.863276 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:49:18.863286 | orchestrator | 2025-09-08 00:49:18.863297 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-08 00:49:18.863308 | orchestrator | Monday 08 September 2025 00:48:36 +0000 (0:00:00.616) 0:01:45.336 ****** 2025-09-08 00:49:18.863318 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:49:18.863329 | orchestrator | 2025-09-08 00:49:18.863340 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-08 00:49:18.863350 | orchestrator | Monday 08 September 2025 00:48:36 +0000 (0:00:00.235) 
0:01:45.572 ****** 2025-09-08 00:49:18.863361 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:49:18.863372 | orchestrator | 2025-09-08 00:49:18.863382 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-08 00:49:18.863393 | orchestrator | Monday 08 September 2025 00:48:38 +0000 (0:00:01.808) 0:01:47.381 ****** 2025-09-08 00:49:18.863403 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:49:18.863414 | orchestrator | 2025-09-08 00:49:18.863425 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-08 00:49:18.863435 | orchestrator | 2025-09-08 00:49:18.863446 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-08 00:49:18.863457 | orchestrator | Monday 08 September 2025 00:48:53 +0000 (0:00:15.736) 0:02:03.118 ****** 2025-09-08 00:49:18.863468 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:49:18.863478 | orchestrator | 2025-09-08 00:49:18.863489 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-08 00:49:18.863500 | orchestrator | Monday 08 September 2025 00:48:54 +0000 (0:00:00.606) 0:02:03.725 ****** 2025-09-08 00:49:18.863510 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:49:18.863521 | orchestrator | 2025-09-08 00:49:18.863531 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-08 00:49:18.863542 | orchestrator | Monday 08 September 2025 00:48:54 +0000 (0:00:00.244) 0:02:03.969 ****** 2025-09-08 00:49:18.863553 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:49:18.863563 | orchestrator | 2025-09-08 00:49:18.863574 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-08 00:49:18.863653 | orchestrator | Monday 08 September 2025 00:48:56 +0000 (0:00:01.724) 0:02:05.693 ****** 2025-09-08 00:49:18.863667 | 
orchestrator | changed: [testbed-node-2] 2025-09-08 00:49:18.863678 | orchestrator | 2025-09-08 00:49:18.863688 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-09-08 00:49:18.863699 | orchestrator | 2025-09-08 00:49:18.863710 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-09-08 00:49:18.863720 | orchestrator | Monday 08 September 2025 00:49:13 +0000 (0:00:16.570) 0:02:22.264 ****** 2025-09-08 00:49:18.863731 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:49:18.863742 | orchestrator | 2025-09-08 00:49:18.863752 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-09-08 00:49:18.863763 | orchestrator | Monday 08 September 2025 00:49:13 +0000 (0:00:00.521) 0:02:22.786 ****** 2025-09-08 00:49:18.863774 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-08 00:49:18.863784 | orchestrator | enable_outward_rabbitmq_True 2025-09-08 00:49:18.863795 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-08 00:49:18.863805 | orchestrator | outward_rabbitmq_restart 2025-09-08 00:49:18.863824 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:49:18.863835 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:49:18.863845 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:49:18.863854 | orchestrator | 2025-09-08 00:49:18.863864 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-09-08 00:49:18.863873 | orchestrator | skipping: no hosts matched 2025-09-08 00:49:18.863883 | orchestrator | 2025-09-08 00:49:18.863892 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-09-08 00:49:18.863906 | orchestrator | skipping: no hosts matched 2025-09-08 00:49:18.863916 | orchestrator | 2025-09-08 00:49:18.863926 | 
orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-09-08 00:49:18.863935 | orchestrator | skipping: no hosts matched 2025-09-08 00:49:18.863945 | orchestrator | 2025-09-08 00:49:18.863954 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:49:18.863964 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-08 00:49:18.863976 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-08 00:49:18.863985 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:49:18.863995 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:49:18.864005 | orchestrator | 2025-09-08 00:49:18.864014 | orchestrator | 2025-09-08 00:49:18.864024 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:49:18.864033 | orchestrator | Monday 08 September 2025 00:49:15 +0000 (0:00:02.384) 0:02:25.171 ****** 2025-09-08 00:49:18.864043 | orchestrator | =============================================================================== 2025-09-08 00:49:18.864052 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 84.23s 2025-09-08 00:49:18.864062 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.26s 2025-09-08 00:49:18.864071 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.81s 2025-09-08 00:49:18.864081 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.89s 2025-09-08 00:49:18.864090 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 3.83s 2025-09-08 00:49:18.864100 | orchestrator | rabbitmq : 
Copying over rabbitmq-env.conf ------------------------------- 3.31s 2025-09-08 00:49:18.864109 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.82s 2025-09-08 00:49:18.864118 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.65s 2025-09-08 00:49:18.864128 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.61s 2025-09-08 00:49:18.864137 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.39s 2025-09-08 00:49:18.864146 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.29s 2025-09-08 00:49:18.864156 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.98s 2025-09-08 00:49:18.864165 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.93s 2025-09-08 00:49:18.864175 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.59s 2025-09-08 00:49:18.864184 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.40s 2025-09-08 00:49:18.864193 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.23s 2025-09-08 00:49:18.864203 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.97s 2025-09-08 00:49:18.864212 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 0.96s 2025-09-08 00:49:18.864222 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.91s 2025-09-08 00:49:18.864238 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.86s 2025-09-08 00:49:21.893076 | orchestrator | 2025-09-08 00:49:21 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:49:21.893204 | orchestrator | 2025-09-08 
00:49:21 | INFO  | Task c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state STARTED 2025-09-08 00:49:21.894114 | orchestrator | 2025-09-08 00:49:21 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED 2025-09-08 00:49:21.894139 | orchestrator | 2025-09-08 00:49:21 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:50:10.619557 | orchestrator | 2025-09-08 00:50:10 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:50:10.622709 | orchestrator | 2025-09-08 00:50:10 | INFO  | Task c3a386ae-4b8c-41f2-a793-b058f7a42942 is in state SUCCESS 2025-09-08 00:50:10.623267 | orchestrator | 2025-09-08 00:50:10.626057 | orchestrator | 2025-09-08 00:50:10.626140 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 00:50:10.626156 | orchestrator | 2025-09-08 00:50:10.626168 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 00:50:10.626179 | orchestrator | Monday 08 September 2025 00:47:45 +0000 (0:00:00.179) 0:00:00.179 ****** 2025-09-08 00:50:10.626190 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:50:10.626201 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:50:10.626212 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:50:10.626222 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:50:10.626233 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:50:10.626244 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:50:10.626254 | orchestrator | 2025-09-08 00:50:10.626265 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 00:50:10.626276 | orchestrator | Monday 08 September 2025 00:47:47 +0000 (0:00:01.401) 0:00:01.581 ****** 2025-09-08 00:50:10.626287 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-08 00:50:10.626299 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-08 00:50:10.626309 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-08 00:50:10.626320 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 
2025-09-08 00:50:10.626331 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-08 00:50:10.626342 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-09-08 00:50:10.626459 | orchestrator | 2025-09-08 00:50:10.626476 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-08 00:50:10.626487 | orchestrator | 2025-09-08 00:50:10.626497 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-08 00:50:10.626508 | orchestrator | Monday 08 September 2025 00:47:48 +0000 (0:00:01.413) 0:00:02.994 ****** 2025-09-08 00:50:10.626520 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:50:10.626532 | orchestrator | 2025-09-08 00:50:10.626971 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-09-08 00:50:10.626996 | orchestrator | Monday 08 September 2025 00:47:50 +0000 (0:00:01.224) 0:00:04.219 ****** 2025-09-08 00:50:10.627009 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.627060 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.627074 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.627086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.627097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.627108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.627119 | orchestrator | 2025-09-08 00:50:10.627146 | orchestrator | TASK [ovn-controller 
: Copying over config.json files for services] ************
2025-09-08 00:50:10.627158 | orchestrator | Monday 08 September 2025 00:47:51 +0000 (0:00:01.351) 0:00:05.570 ******
2025-09-08 00:50:10.627169 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627180 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627209 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627247 | orchestrator |
2025-09-08 00:50:10.627258 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-09-08 00:50:10.627269 | orchestrator | Monday 08 September 2025 00:47:53 +0000 (0:00:01.725) 0:00:07.296 ******
2025-09-08 00:50:10.627280 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627291 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627339 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627396 | orchestrator |
2025-09-08 00:50:10.627407 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-09-08 00:50:10.627419 | orchestrator | Monday 08 September 2025 00:47:54 +0000 (0:00:01.197) 0:00:08.494 ******
2025-09-08 00:50:10.627430 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627447 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627458 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627502 | orchestrator |
2025-09-08 00:50:10.627518 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-09-08 00:50:10.627530 | orchestrator | Monday 08 September 2025 00:47:55 +0000 (0:00:01.586) 0:00:10.081 ******
2025-09-08 00:50:10.627541 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627552 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627603 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.627655 | orchestrator |
2025-09-08 00:50:10.627666 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-09-08 00:50:10.627677 | orchestrator | Monday 08 September 2025 00:47:57 +0000 (0:00:01.445) 0:00:11.526 ******
2025-09-08 00:50:10.627688 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:50:10.627699 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:50:10.627709 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:50:10.627720 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:50:10.627730 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:50:10.627741 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:50:10.627752 | orchestrator |
2025-09-08 00:50:10.627762 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-09-08 00:50:10.627773 | orchestrator | Monday 08 September 2025 00:48:00 +0000 (0:00:02.786) 0:00:14.312 ******
2025-09-08 00:50:10.627784 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-09-08 00:50:10.627795 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-09-08 00:50:10.627805 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-09-08 00:50:10.627816 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-09-08 00:50:10.627827 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-09-08 00:50:10.627837 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-09-08 00:50:10.627848 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-08 00:50:10.627859 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-08 00:50:10.627875 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-08 00:50:10.627892 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-08 00:50:10.627903 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-08 00:50:10.627914 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-08 00:50:10.627925 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-08 00:50:10.627936 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-08 00:50:10.627947 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-08 00:50:10.627958 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-08 00:50:10.627969 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-08 00:50:10.627980 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-08 00:50:10.627991 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-08 00:50:10.628002 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-08 00:50:10.628013 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-08 00:50:10.628024 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-08 00:50:10.628034 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-08 00:50:10.628045 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-08 00:50:10.628055 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-08 00:50:10.628066 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-08 00:50:10.628081 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-08 00:50:10.628092 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-08 00:50:10.628103 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-08 00:50:10.628113 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-08 00:50:10.628124 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-08 00:50:10.628135 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-08 00:50:10.628145 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-08 00:50:10.628156 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-08 00:50:10.628167 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-08 00:50:10.628177 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-08 00:50:10.628188 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-08 00:50:10.628199 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-08 00:50:10.628216 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-08 00:50:10.628226 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-08 00:50:10.628237 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-08 00:50:10.628247 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-09-08 00:50:10.628258 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-08 00:50:10.628269 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-09-08 00:50:10.628286 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-09-08 00:50:10.628297 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-09-08 00:50:10.628307 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-08 00:50:10.628318 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-09-08 00:50:10.628329 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-09-08 00:50:10.628339 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-08 00:50:10.628350 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-08 00:50:10.628361 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-08 00:50:10.628372 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-08 00:50:10.628383 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-08 00:50:10.628393 | orchestrator |
2025-09-08 00:50:10.628404 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-08 00:50:10.628415 | orchestrator | Monday 08 September 2025 00:48:19 +0000 (0:00:19.200) 0:00:33.512 ******
2025-09-08 00:50:10.628426 | orchestrator |
2025-09-08 00:50:10.628437 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-08 00:50:10.628447 | orchestrator | Monday 08 September 2025 00:48:19 +0000 (0:00:00.257) 0:00:33.769 ******
2025-09-08 00:50:10.628458 | orchestrator |
2025-09-08 00:50:10.628469 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-08 00:50:10.628479 | orchestrator | Monday 08 September 2025 00:48:19 +0000 (0:00:00.065) 0:00:33.834 ******
2025-09-08 00:50:10.628490 | orchestrator |
2025-09-08 00:50:10.628500 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-08 00:50:10.628511 | orchestrator | Monday 08 September 2025 00:48:19 +0000 (0:00:00.066) 0:00:33.901 ******
2025-09-08 00:50:10.628522 | orchestrator |
2025-09-08 00:50:10.628532 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-08 00:50:10.628543 | orchestrator | Monday 08 September 2025 00:48:19 +0000 (0:00:00.065) 0:00:33.967 ******
2025-09-08 00:50:10.628554 | orchestrator |
2025-09-08 00:50:10.628582 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-08 00:50:10.628594 | orchestrator | Monday 08 September 2025 00:48:19 +0000 (0:00:00.063) 0:00:34.030 ******
2025-09-08 00:50:10.628605 | orchestrator |
2025-09-08 00:50:10.628622 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-09-08 00:50:10.628633 | orchestrator | Monday 08 September 2025 00:48:19 +0000 (0:00:00.067) 0:00:34.097 ******
2025-09-08 00:50:10.628644 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:50:10.628654 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:50:10.628665 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:50:10.628676 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:50:10.628687 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:50:10.628697 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:50:10.628708 | orchestrator |
2025-09-08 00:50:10.628719 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-09-08 00:50:10.628730 | orchestrator | Monday 08 September 2025 00:48:21 +0000 (0:00:01.574) 0:00:35.672 ******
2025-09-08 00:50:10.628741 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:50:10.628752 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:50:10.628763 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:50:10.628773 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:50:10.628784 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:50:10.628795 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:50:10.628805 | orchestrator |
2025-09-08 00:50:10.628816 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-09-08 00:50:10.628827 | orchestrator |
2025-09-08 00:50:10.628838 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-08 00:50:10.628848 | orchestrator | Monday 08 September 2025 00:48:54 +0000 (0:00:32.701) 0:01:08.374 ******
2025-09-08 00:50:10.628859 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:50:10.628870 | orchestrator |
2025-09-08 00:50:10.628881 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-08 00:50:10.628891 | orchestrator | Monday 08 September 2025 00:48:54 +0000 (0:00:00.744) 0:01:09.118 ******
2025-09-08 00:50:10.628902 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:50:10.628913 | orchestrator |
2025-09-08 00:50:10.628924 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-09-08 00:50:10.628934 | orchestrator | Monday 08 September 2025 00:48:55 +0000 (0:00:00.587) 0:01:09.706 ******
2025-09-08 00:50:10.628945 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:50:10.628956 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:50:10.628967 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:50:10.628977 | orchestrator |
2025-09-08 00:50:10.628988 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-09-08 00:50:10.628999 | orchestrator | Monday 08 September 2025 00:48:56 +0000 (0:00:00.985) 0:01:10.692 ******
2025-09-08 00:50:10.629009 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:50:10.629020 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:50:10.629031 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:50:10.629041 | orchestrator |
2025-09-08 00:50:10.629059 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-09-08 00:50:10.629070 | orchestrator | Monday 08 September 2025 00:48:56 +0000 (0:00:00.400) 0:01:11.092 ******
2025-09-08 00:50:10.629080 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:50:10.629091 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:50:10.629102 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:50:10.629113 | orchestrator |
2025-09-08 00:50:10.629123 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-09-08 00:50:10.629134 | orchestrator | Monday 08 September 2025 00:48:57 +0000 (0:00:00.395) 0:01:11.487 ******
2025-09-08 00:50:10.629145 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:50:10.629156 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:50:10.629166 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:50:10.629177 | orchestrator |
2025-09-08 00:50:10.629188 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-09-08 00:50:10.629199 | orchestrator | Monday 08 September 2025 00:48:57 +0000 (0:00:00.646) 0:01:12.134 ******
2025-09-08 00:50:10.629215 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:50:10.629226 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:50:10.629236 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:50:10.629247 | orchestrator |
2025-09-08 00:50:10.629258 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-09-08 00:50:10.629269 | orchestrator | Monday 08 September 2025 00:48:58 +0000 (0:00:00.572) 0:01:12.707 ******
2025-09-08 00:50:10.629280 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:50:10.629290 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:50:10.629301 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:50:10.629312 | orchestrator |
2025-09-08 00:50:10.629322 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-09-08 00:50:10.629333 | orchestrator | Monday 08 September 2025 00:48:58 +0000 (0:00:00.296) 0:01:13.003 ******
2025-09-08 00:50:10.629344 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:50:10.629355 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:50:10.629365 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:50:10.629376 | orchestrator |
2025-09-08 00:50:10.629387 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-09-08 00:50:10.629397 | orchestrator | Monday 08 September 2025 00:48:59 +0000 (0:00:00.285) 0:01:13.288 ******
2025-09-08 00:50:10.629408 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:50:10.629419 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:50:10.629429 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:50:10.629440 | orchestrator |
2025-09-08 00:50:10.629451 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-09-08 00:50:10.629462 | orchestrator | Monday 08 September 2025 00:48:59 +0000 (0:00:00.296) 0:01:13.585 ******
2025-09-08 00:50:10.629473 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:50:10.629483 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:50:10.629494 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:50:10.629504 | orchestrator |
2025-09-08 00:50:10.629515 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-09-08 00:50:10.629526 | orchestrator | Monday 08 September 2025 00:48:59 +0000 (0:00:00.483) 0:01:14.068 ******
2025-09-08 00:50:10.629547 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:50:10.629559 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:50:10.629697 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:50:10.629802 | orchestrator |
2025-09-08 00:50:10.629818 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-09-08 00:50:10.629831 | orchestrator | Monday 08 September 2025 00:49:00 +0000 (0:00:00.302) 0:01:14.371 ******
2025-09-08 00:50:10.629842 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:50:10.629853 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:50:10.629863 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:50:10.629874 | orchestrator |
2025-09-08 00:50:10.629886 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-09-08 00:50:10.629896 | orchestrator | Monday 08 September 2025 00:49:00 +0000 (0:00:00.310) 0:01:14.682 ******
2025-09-08 00:50:10.629907 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:50:10.629917 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:50:10.629928 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:50:10.629938 | orchestrator |
2025-09-08 00:50:10.629949 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-09-08 00:50:10.629960 | orchestrator | Monday 08 September 2025 00:49:00 +0000 (0:00:00.293) 0:01:14.976 ******
2025-09-08 00:50:10.629970 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:50:10.629981 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:50:10.629991 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:50:10.630002 | orchestrator |
2025-09-08 00:50:10.630013 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-09-08 00:50:10.630069 | orchestrator | Monday 08 September 2025 00:49:01 +0000 (0:00:00.346) 0:01:15.322 ******
2025-09-08 00:50:10.630080 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:50:10.630117 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:50:10.630129 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:50:10.630140 | orchestrator |
2025-09-08 00:50:10.630150 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-09-08 00:50:10.630161 | orchestrator | Monday 08 September 2025 00:49:01 +0000 (0:00:00.536) 0:01:15.859 ******
2025-09-08 00:50:10.630172 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:50:10.630183 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:50:10.630193 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:50:10.630204 | orchestrator |
2025-09-08 00:50:10.630215 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-09-08 00:50:10.630225 | orchestrator | Monday 08 September 2025 00:49:02 +0000 (0:00:00.416) 0:01:16.275 ******
2025-09-08 00:50:10.630236 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:50:10.630247 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:50:10.630257 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:50:10.630268 | orchestrator |
2025-09-08 00:50:10.630278 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-09-08 00:50:10.630289 | orchestrator | Monday 08 September 2025 00:49:02 +0000 (0:00:00.299) 0:01:16.575 ******
2025-09-08 00:50:10.630300 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:50:10.630311 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:50:10.630349 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:50:10.630361 | orchestrator |
2025-09-08 00:50:10.630372 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-08 00:50:10.630382 | orchestrator | Monday 08 September 2025 00:49:02 +0000 (0:00:00.277) 0:01:16.853 ******
2025-09-08 00:50:10.630394 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:50:10.630405 | orchestrator |
2025-09-08 00:50:10.630416 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-09-08 00:50:10.630427 | orchestrator | Monday 08 September 2025 00:49:03 +0000 (0:00:00.752) 0:01:17.606 ******
2025-09-08 00:50:10.630437 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:50:10.630449 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:50:10.630459 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:50:10.630470 | orchestrator |
2025-09-08 00:50:10.630481 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-09-08 00:50:10.630492 | orchestrator | Monday 08 September 2025 00:49:03 +0000 (0:00:00.473) 0:01:18.080 ******
2025-09-08 00:50:10.630503 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:50:10.630513 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:50:10.630524 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:50:10.630535 | orchestrator |
2025-09-08 00:50:10.630545 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-09-08 00:50:10.630556 | orchestrator | Monday 08 September 2025 00:49:04 +0000 (0:00:00.474) 0:01:18.554 ******
2025-09-08 00:50:10.630567 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:50:10.630608 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:50:10.630619 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:50:10.630630 | orchestrator |
2025-09-08 00:50:10.630641 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-09-08 00:50:10.630652 | orchestrator | Monday 08 September 2025 00:49:04 +0000 (0:00:00.505) 0:01:19.059 ******
2025-09-08 00:50:10.630662 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:50:10.630673 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:50:10.630684 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:50:10.630694 | orchestrator |
2025-09-08 00:50:10.630705 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-09-08 00:50:10.630716 | orchestrator | Monday 08 September 2025 00:49:05 +0000 (0:00:00.355) 0:01:19.415 ******
2025-09-08 00:50:10.630727 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:50:10.630738 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:50:10.630756 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:50:10.630767 | orchestrator |
2025-09-08 00:50:10.630778 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-09-08 00:50:10.630788 | orchestrator | Monday 08 September 2025 00:49:05 +0000 (0:00:00.334) 0:01:19.750 ******
2025-09-08 00:50:10.630799 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:50:10.630810 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:50:10.630820 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:50:10.630831 | orchestrator |
2025-09-08 00:50:10.630842 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-09-08 00:50:10.630853 | orchestrator | Monday 08 September 2025 00:49:05 +0000 (0:00:00.435) 0:01:20.185 ******
2025-09-08 00:50:10.630864 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:50:10.630875 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:50:10.630885 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:50:10.630896 | orchestrator |
2025-09-08 00:50:10.630907 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-09-08 00:50:10.630918 | orchestrator | Monday 08 September 2025 00:49:06 +0000 (0:00:00.578) 0:01:20.763 ******
2025-09-08 00:50:10.630928 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:50:10.630939 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:50:10.630949 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:50:10.630960 | orchestrator |
2025-09-08 00:50:10.630971 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-09-08 00:50:10.630982 | orchestrator | Monday 08 September 2025 00:49:06 +0000 (0:00:00.428) 0:01:21.191 ******
2025-09-08 00:50:10.630994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.631047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.631060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.631081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.631095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.631106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.631125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.631137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:50:10.631153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image':
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.631164 | orchestrator | 2025-09-08 00:50:10.631176 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-08 00:50:10.631187 | orchestrator | Monday 08 September 2025 00:49:08 +0000 (0:00:01.526) 0:01:22.719 ****** 2025-09-08 00:50:10.631198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.631210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.631221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.631232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.631249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.631260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.631278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.631289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.631300 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.631311 | orchestrator | 2025-09-08 00:50:10.631322 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-08 00:50:10.631333 | orchestrator | Monday 08 September 2025 00:49:12 +0000 (0:00:03.880) 0:01:26.599 ****** 2025-09-08 00:50:10.631349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.631361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.631372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.631383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.631395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.631420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.631432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.631450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.631461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.631472 | orchestrator | 2025-09-08 00:50:10.631483 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-08 00:50:10.631494 | orchestrator | Monday 08 September 2025 00:49:14 +0000 (0:00:02.000) 0:01:28.599 ****** 2025-09-08 00:50:10.631504 | orchestrator | 2025-09-08 00:50:10.631515 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-08 00:50:10.631526 | orchestrator | Monday 08 September 2025 00:49:14 +0000 (0:00:00.254) 0:01:28.854 ****** 2025-09-08 00:50:10.631537 | orchestrator | 2025-09-08 00:50:10.631547 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-08 00:50:10.631558 | orchestrator | Monday 08 September 2025 00:49:14 +0000 (0:00:00.064) 0:01:28.918 ****** 2025-09-08 00:50:10.631584 | orchestrator | 2025-09-08 00:50:10.631595 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-08 00:50:10.631606 | orchestrator | Monday 08 September 2025 00:49:14 +0000 (0:00:00.065) 0:01:28.984 ****** 2025-09-08 00:50:10.631617 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:50:10.631628 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:50:10.631639 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:50:10.631649 | orchestrator | 2025-09-08 00:50:10.631665 | orchestrator | RUNNING HANDLER [ovn-db 
: Restart ovn-sb-db container] ************************* 2025-09-08 00:50:10.631676 | orchestrator | Monday 08 September 2025 00:49:22 +0000 (0:00:07.523) 0:01:36.507 ****** 2025-09-08 00:50:10.631687 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:50:10.631697 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:50:10.631708 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:50:10.631719 | orchestrator | 2025-09-08 00:50:10.631730 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-08 00:50:10.631740 | orchestrator | Monday 08 September 2025 00:49:24 +0000 (0:00:02.562) 0:01:39.070 ****** 2025-09-08 00:50:10.631751 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:50:10.631761 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:50:10.631772 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:50:10.631783 | orchestrator | 2025-09-08 00:50:10.631793 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-08 00:50:10.631804 | orchestrator | Monday 08 September 2025 00:49:27 +0000 (0:00:02.678) 0:01:41.749 ****** 2025-09-08 00:50:10.631815 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:50:10.631825 | orchestrator | 2025-09-08 00:50:10.631836 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-08 00:50:10.631847 | orchestrator | Monday 08 September 2025 00:49:27 +0000 (0:00:00.141) 0:01:41.890 ****** 2025-09-08 00:50:10.631857 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:50:10.631868 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:50:10.631879 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:50:10.631897 | orchestrator | 2025-09-08 00:50:10.631908 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-08 00:50:10.631919 | orchestrator | Monday 08 September 2025 00:49:29 +0000 (0:00:01.641) 0:01:43.531 
****** 2025-09-08 00:50:10.631930 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:50:10.631940 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:50:10.631951 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:50:10.631962 | orchestrator | 2025-09-08 00:50:10.631972 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-08 00:50:10.631983 | orchestrator | Monday 08 September 2025 00:49:30 +0000 (0:00:00.770) 0:01:44.302 ****** 2025-09-08 00:50:10.631994 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:50:10.632004 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:50:10.632015 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:50:10.632026 | orchestrator | 2025-09-08 00:50:10.632036 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-08 00:50:10.632047 | orchestrator | Monday 08 September 2025 00:49:30 +0000 (0:00:00.876) 0:01:45.178 ****** 2025-09-08 00:50:10.632057 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:50:10.632068 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:50:10.632079 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:50:10.632089 | orchestrator | 2025-09-08 00:50:10.632100 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-08 00:50:10.632111 | orchestrator | Monday 08 September 2025 00:49:31 +0000 (0:00:00.793) 0:01:45.972 ****** 2025-09-08 00:50:10.632121 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:50:10.632132 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:50:10.632149 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:50:10.632160 | orchestrator | 2025-09-08 00:50:10.632171 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-08 00:50:10.632181 | orchestrator | Monday 08 September 2025 00:49:33 +0000 (0:00:01.647) 0:01:47.619 ****** 2025-09-08 00:50:10.632192 | 
orchestrator | ok: [testbed-node-0] 2025-09-08 00:50:10.632203 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:50:10.632213 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:50:10.632224 | orchestrator | 2025-09-08 00:50:10.632235 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-09-08 00:50:10.632245 | orchestrator | Monday 08 September 2025 00:49:34 +0000 (0:00:00.814) 0:01:48.434 ****** 2025-09-08 00:50:10.632256 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:50:10.632267 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:50:10.632278 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:50:10.632288 | orchestrator | 2025-09-08 00:50:10.632299 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-08 00:50:10.632310 | orchestrator | Monday 08 September 2025 00:49:34 +0000 (0:00:00.298) 0:01:48.733 ****** 2025-09-08 00:50:10.632321 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632332 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632344 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632360 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632380 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632392 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632403 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632414 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': 
{'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632432 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632443 | orchestrator | 2025-09-08 00:50:10.632455 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-08 00:50:10.632466 | orchestrator | Monday 08 September 2025 00:49:35 +0000 (0:00:01.475) 0:01:50.208 ****** 2025-09-08 00:50:10.632476 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632488 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632499 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 
'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632510 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632544 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 
00:50:10.632566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632603 | orchestrator | 2025-09-08 00:50:10.632614 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-08 00:50:10.632625 | orchestrator | Monday 08 September 2025 00:49:40 +0000 (0:00:04.983) 0:01:55.192 ****** 2025-09-08 00:50:10.632643 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632655 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632666 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632700 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632750 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:50:10.632761 | orchestrator | 2025-09-08 00:50:10.632772 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-08 00:50:10.632783 | orchestrator | Monday 08 September 2025 00:49:43 +0000 (0:00:02.877) 0:01:58.069 ****** 2025-09-08 00:50:10.632794 | orchestrator | 2025-09-08 00:50:10.632805 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-08 00:50:10.632815 | orchestrator | Monday 08 September 2025 00:49:43 +0000 (0:00:00.071) 0:01:58.141 ****** 2025-09-08 00:50:10.632826 | orchestrator | 2025-09-08 00:50:10.632837 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-08 00:50:10.632847 | orchestrator | Monday 08 September 2025 00:49:43 +0000 (0:00:00.066) 0:01:58.207 ****** 2025-09-08 00:50:10.632858 | orchestrator | 2025-09-08 00:50:10.632869 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db 
container] ************************* 2025-09-08 00:50:10.632879 | orchestrator | Monday 08 September 2025 00:49:44 +0000 (0:00:00.065) 0:01:58.272 ****** 2025-09-08 00:50:10.632890 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:50:10.632901 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:50:10.632911 | orchestrator | 2025-09-08 00:50:10.632928 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-08 00:50:10.632939 | orchestrator | Monday 08 September 2025 00:49:50 +0000 (0:00:06.192) 0:02:04.464 ****** 2025-09-08 00:50:10.632950 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:50:10.632960 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:50:10.632971 | orchestrator | 2025-09-08 00:50:10.632981 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-08 00:50:10.632992 | orchestrator | Monday 08 September 2025 00:49:56 +0000 (0:00:06.115) 0:02:10.580 ****** 2025-09-08 00:50:10.633009 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:50:10.633020 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:50:10.633031 | orchestrator | 2025-09-08 00:50:10.633042 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-08 00:50:10.633052 | orchestrator | Monday 08 September 2025 00:50:03 +0000 (0:00:06.664) 0:02:17.245 ****** 2025-09-08 00:50:10.633063 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:50:10.633073 | orchestrator | 2025-09-08 00:50:10.633084 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-08 00:50:10.633095 | orchestrator | Monday 08 September 2025 00:50:03 +0000 (0:00:00.149) 0:02:17.394 ****** 2025-09-08 00:50:10.633105 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:50:10.633116 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:50:10.633127 | orchestrator | ok: [testbed-node-2] 2025-09-08 
00:50:10.633137 | orchestrator | 2025-09-08 00:50:10.633148 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-08 00:50:10.633159 | orchestrator | Monday 08 September 2025 00:50:03 +0000 (0:00:00.791) 0:02:18.186 ****** 2025-09-08 00:50:10.633170 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:50:10.633180 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:50:10.633191 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:50:10.633201 | orchestrator | 2025-09-08 00:50:10.633212 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-08 00:50:10.633223 | orchestrator | Monday 08 September 2025 00:50:04 +0000 (0:00:00.625) 0:02:18.811 ****** 2025-09-08 00:50:10.633233 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:50:10.633244 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:50:10.633255 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:50:10.633265 | orchestrator | 2025-09-08 00:50:10.633276 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-08 00:50:10.633287 | orchestrator | Monday 08 September 2025 00:50:05 +0000 (0:00:00.797) 0:02:19.608 ****** 2025-09-08 00:50:10.633298 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:50:10.633308 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:50:10.633319 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:50:10.633330 | orchestrator | 2025-09-08 00:50:10.633341 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-08 00:50:10.633351 | orchestrator | Monday 08 September 2025 00:50:06 +0000 (0:00:00.909) 0:02:20.517 ****** 2025-09-08 00:50:10.633362 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:50:10.633373 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:50:10.633384 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:50:10.633394 | orchestrator | 
2025-09-08 00:50:10.633405 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-09-08 00:50:10.633420 | orchestrator | Monday 08 September 2025 00:50:07 +0000 (0:00:00.795) 0:02:21.313 ******
2025-09-08 00:50:10.633431 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:50:10.633442 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:50:10.633453 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:50:10.633463 | orchestrator |
2025-09-08 00:50:10.633474 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:50:10.633486 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-08 00:50:10.633497 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-08 00:50:10.633508 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-08 00:50:10.633519 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:50:10.633530 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:50:10.633546 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:50:10.633557 | orchestrator |
2025-09-08 00:50:10.633568 | orchestrator |
2025-09-08 00:50:10.633633 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:50:10.633644 | orchestrator | Monday 08 September 2025 00:50:07 +0000 (0:00:00.852) 0:02:22.166 ******
2025-09-08 00:50:10.633655 | orchestrator | ===============================================================================
2025-09-08 00:50:10.633665 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 32.70s
2025-09-08 00:50:10.633676 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.20s
2025-09-08 00:50:10.633687 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.72s
2025-09-08 00:50:10.633697 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.34s
2025-09-08 00:50:10.633708 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.68s
2025-09-08 00:50:10.633719 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.98s
2025-09-08 00:50:10.633729 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.88s
2025-09-08 00:50:10.633746 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.88s
2025-09-08 00:50:10.633757 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.79s
2025-09-08 00:50:10.633768 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.00s
2025-09-08 00:50:10.633778 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.73s
2025-09-08 00:50:10.633789 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.65s
2025-09-08 00:50:10.633800 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 1.64s
2025-09-08 00:50:10.633811 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.59s
2025-09-08 00:50:10.633821 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.57s
2025-09-08 00:50:10.633832 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.53s
2025-09-08 00:50:10.633841 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.48s
2025-09-08 00:50:10.633851
| orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.45s
2025-09-08 00:50:10.633860 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.41s
2025-09-08 00:50:10.633870 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.40s
2025-09-08 00:50:10.633879 | orchestrator | 2025-09-08 00:50:10 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:50:10.633889 | orchestrator | 2025-09-08 00:50:10 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:50:13.682361 | orchestrator | 2025-09-08 00:50:13 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:50:13.683680 | orchestrator | 2025-09-08 00:50:13 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state STARTED
2025-09-08 00:50:13.683948 | orchestrator | 2025-09-08 00:50:13 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:53:01.302909 | orchestrator | 2025-09-08 00:53:01 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED
2025-09-08 00:53:01.303859 | orchestrator | 2025-09-08 00:53:01 | INFO
| Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:53:01.308871 | orchestrator | 2025-09-08 00:53:01 | INFO  | Task c1dcd458-18b8-43d7-8322-50ebf0ca0297 is in state SUCCESS 2025-09-08 00:53:01.311449 | orchestrator | 2025-09-08 00:53:01.311588 | orchestrator | 2025-09-08 00:53:01.311636 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 00:53:01.311651 | orchestrator | 2025-09-08 00:53:01.311662 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 00:53:01.311675 | orchestrator | Monday 08 September 2025 00:46:29 +0000 (0:00:00.323) 0:00:00.323 ****** 2025-09-08 00:53:01.311786 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:53:01.311799 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:53:01.311810 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:53:01.311821 | orchestrator | 2025-09-08 00:53:01.311858 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 00:53:01.311869 | orchestrator | Monday 08 September 2025 00:46:30 +0000 (0:00:00.488) 0:00:00.811 ****** 2025-09-08 00:53:01.311881 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-09-08 00:53:01.311892 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-09-08 00:53:01.311903 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-09-08 00:53:01.311914 | orchestrator | 2025-09-08 00:53:01.311925 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-09-08 00:53:01.311936 | orchestrator | 2025-09-08 00:53:01.311947 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-08 00:53:01.311957 | orchestrator | Monday 08 September 2025 00:46:30 +0000 (0:00:00.576) 0:00:01.388 ****** 2025-09-08 00:53:01.311969 | orchestrator | included: 
/ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:53:01.311980 | orchestrator | 2025-09-08 00:53:01.311991 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-09-08 00:53:01.312025 | orchestrator | Monday 08 September 2025 00:46:31 +0000 (0:00:00.703) 0:00:02.091 ****** 2025-09-08 00:53:01.312036 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:53:01.312049 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:53:01.312061 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:53:01.312073 | orchestrator | 2025-09-08 00:53:01.312086 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-08 00:53:01.312100 | orchestrator | Monday 08 September 2025 00:46:32 +0000 (0:00:01.124) 0:00:03.216 ****** 2025-09-08 00:53:01.312113 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:53:01.312153 | orchestrator | 2025-09-08 00:53:01.312166 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-09-08 00:53:01.312179 | orchestrator | Monday 08 September 2025 00:46:33 +0000 (0:00:01.379) 0:00:04.595 ****** 2025-09-08 00:53:01.312191 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:53:01.312233 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:53:01.312246 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:53:01.312259 | orchestrator | 2025-09-08 00:53:01.312272 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-09-08 00:53:01.312285 | orchestrator | Monday 08 September 2025 00:46:34 +0000 (0:00:01.095) 0:00:05.691 ****** 2025-09-08 00:53:01.312364 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-08 00:53:01.312378 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 
1}) 2025-09-08 00:53:01.312391 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-08 00:53:01.312403 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-08 00:53:01.312414 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-08 00:53:01.312467 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-08 00:53:01.312481 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-08 00:53:01.312492 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-08 00:53:01.312523 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-08 00:53:01.312535 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-08 00:53:01.312546 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-08 00:53:01.312556 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-08 00:53:01.312567 | orchestrator | 2025-09-08 00:53:01.312578 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-08 00:53:01.312588 | orchestrator | Monday 08 September 2025 00:46:37 +0000 (0:00:02.651) 0:00:08.342 ****** 2025-09-08 00:53:01.312599 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-08 00:53:01.312610 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-08 00:53:01.312621 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-08 00:53:01.312632 | orchestrator | 2025-09-08 00:53:01.312643 | orchestrator | TASK [module-load : Persist modules via 
modules-load.d] ************************ 2025-09-08 00:53:01.312654 | orchestrator | Monday 08 September 2025 00:46:38 +0000 (0:00:01.158) 0:00:09.504 ****** 2025-09-08 00:53:01.312664 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-08 00:53:01.312676 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-08 00:53:01.312687 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-08 00:53:01.312697 | orchestrator | 2025-09-08 00:53:01.312708 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-08 00:53:01.312719 | orchestrator | Monday 08 September 2025 00:46:40 +0000 (0:00:02.208) 0:00:11.712 ****** 2025-09-08 00:53:01.312739 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-09-08 00:53:01.312750 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.312774 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-09-08 00:53:01.312786 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.312797 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-09-08 00:53:01.312807 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.312818 | orchestrator | 2025-09-08 00:53:01.312829 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-09-08 00:53:01.312840 | orchestrator | Monday 08 September 2025 00:46:43 +0000 (0:00:02.128) 0:00:13.840 ****** 2025-09-08 00:53:01.312854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-08 00:53:01.312871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-08 00:53:01.312883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-08 00:53:01.312899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:53:01.312912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:53:01.312931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:53:01.312950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}}) 2025-09-08 00:53:01.312962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-08 00:53:01.312974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-08 00:53:01.312985 | orchestrator | 2025-09-08 00:53:01.313126 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-08 00:53:01.313137 | orchestrator | Monday 08 September 2025 00:46:46 +0000 (0:00:03.685) 0:00:17.526 ****** 2025-09-08 00:53:01.313148 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.313159 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.313170 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.313181 | orchestrator | 2025-09-08 00:53:01.313192 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-08 00:53:01.313203 | orchestrator | Monday 08 September 2025 00:46:48 +0000 (0:00:01.679) 0:00:19.206 ****** 2025-09-08 00:53:01.313213 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-08 00:53:01.313224 | 
orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-08 00:53:01.313235 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-08 00:53:01.313246 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-08 00:53:01.313257 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-08 00:53:01.313267 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-08 00:53:01.313278 | orchestrator | 2025-09-08 00:53:01.313289 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-08 00:53:01.313300 | orchestrator | Monday 08 September 2025 00:46:50 +0000 (0:00:02.263) 0:00:21.469 ****** 2025-09-08 00:53:01.313315 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.313326 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.313337 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.313348 | orchestrator | 2025-09-08 00:53:01.313358 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-08 00:53:01.313369 | orchestrator | Monday 08 September 2025 00:46:51 +0000 (0:00:00.984) 0:00:22.454 ****** 2025-09-08 00:53:01.313380 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:53:01.313398 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:53:01.313408 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:53:01.313419 | orchestrator | 2025-09-08 00:53:01.313430 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-08 00:53:01.313441 | orchestrator | Monday 08 September 2025 00:46:53 +0000 (0:00:01.367) 0:00:23.822 ****** 2025-09-08 00:53:01.313452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.313472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.313485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.313497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__a9c0c560f3332f2eb5856dfd3397e3767f55c6cf', '__omit_place_holder__a9c0c560f3332f2eb5856dfd3397e3767f55c6cf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-08 00:53:01.313531 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.313543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.313555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.313573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.313585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a9c0c560f3332f2eb5856dfd3397e3767f55c6cf', '__omit_place_holder__a9c0c560f3332f2eb5856dfd3397e3767f55c6cf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-08 00:53:01.313596 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.313642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.313656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.313709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.313722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a9c0c560f3332f2eb5856dfd3397e3767f55c6cf', '__omit_place_holder__a9c0c560f3332f2eb5856dfd3397e3767f55c6cf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-08 00:53:01.313744 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.313756 | orchestrator | 2025-09-08 00:53:01.313766 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-08 00:53:01.313782 | orchestrator | Monday 08 September 2025 00:46:53 +0000 
(0:00:00.817) 0:00:24.639 ****** 2025-09-08 00:53:01.313793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-08 00:53:01.313805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-08 00:53:01.313825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-08 00:53:01.313837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:53:01.313848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.313859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a9c0c560f3332f2eb5856dfd3397e3767f55c6cf', '__omit_place_holder__a9c0c560f3332f2eb5856dfd3397e3767f55c6cf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-08 
00:53:01.313881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:53:01.313893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.313904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a9c0c560f3332f2eb5856dfd3397e3767f55c6cf', '__omit_place_holder__a9c0c560f3332f2eb5856dfd3397e3767f55c6cf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-08 00:53:01.313921 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:53:01.313933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.313944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a9c0c560f3332f2eb5856dfd3397e3767f55c6cf', '__omit_place_holder__a9c0c560f3332f2eb5856dfd3397e3767f55c6cf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-08 00:53:01.313961 | orchestrator | 2025-09-08 00:53:01.313972 | orchestrator | TASK [loadbalancer : Copying over config.json 
files for services] ************** 2025-09-08 00:53:01.314135 | orchestrator | Monday 08 September 2025 00:46:58 +0000 (0:00:04.564) 0:00:29.203 ****** 2025-09-08 00:53:01.314159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-08 00:53:01.314172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-08 00:53:01.314184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-08 00:53:01.314206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:53:01.314217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:53:01.314229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:53:01.314248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-08 00:53:01.314264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-08 00:53:01.314276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-08 00:53:01.314287 | orchestrator | 2025-09-08 00:53:01.314298 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-09-08 00:53:01.314309 | orchestrator 
| Monday 08 September 2025 00:47:01 +0000 (0:00:03.226) 0:00:32.430 ****** 2025-09-08 00:53:01.314320 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-08 00:53:01.314330 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-08 00:53:01.314341 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-08 00:53:01.314352 | orchestrator | 2025-09-08 00:53:01.314443 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-09-08 00:53:01.314457 | orchestrator | Monday 08 September 2025 00:47:04 +0000 (0:00:03.234) 0:00:35.665 ****** 2025-09-08 00:53:01.314468 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-08 00:53:01.314479 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-08 00:53:01.314490 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-08 00:53:01.314501 | orchestrator | 2025-09-08 00:53:01.314583 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-09-08 00:53:01.314595 | orchestrator | Monday 08 September 2025 00:47:11 +0000 (0:00:06.615) 0:00:42.281 ****** 2025-09-08 00:53:01.314606 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.314617 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.314628 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.314639 | orchestrator | 2025-09-08 00:53:01.314650 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-09-08 00:53:01.314720 | orchestrator | Monday 08 September 2025 00:47:12 +0000 (0:00:01.196) 
0:00:43.477 ****** 2025-09-08 00:53:01.314733 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-08 00:53:01.314745 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-08 00:53:01.314764 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-08 00:53:01.314775 | orchestrator | 2025-09-08 00:53:01.314785 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-09-08 00:53:01.314796 | orchestrator | Monday 08 September 2025 00:47:16 +0000 (0:00:03.431) 0:00:46.908 ****** 2025-09-08 00:53:01.314807 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-08 00:53:01.314818 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-08 00:53:01.314829 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-08 00:53:01.314839 | orchestrator | 2025-09-08 00:53:01.314850 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-09-08 00:53:01.314861 | orchestrator | Monday 08 September 2025 00:47:20 +0000 (0:00:04.254) 0:00:51.163 ****** 2025-09-08 00:53:01.314872 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-09-08 00:53:01.314883 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-09-08 00:53:01.314919 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-09-08 00:53:01.314953 | orchestrator | 2025-09-08 00:53:01.314965 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-09-08 00:53:01.314975 
| orchestrator | Monday 08 September 2025 00:47:22 +0000 (0:00:02.084) 0:00:53.247 ****** 2025-09-08 00:53:01.314986 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-09-08 00:53:01.314997 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-09-08 00:53:01.315037 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-09-08 00:53:01.315048 | orchestrator | 2025-09-08 00:53:01.315093 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-08 00:53:01.315105 | orchestrator | Monday 08 September 2025 00:47:24 +0000 (0:00:01.868) 0:00:55.115 ****** 2025-09-08 00:53:01.315116 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:53:01.315127 | orchestrator | 2025-09-08 00:53:01.315138 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-09-08 00:53:01.315154 | orchestrator | Monday 08 September 2025 00:47:25 +0000 (0:00:00.859) 0:00:55.975 ****** 2025-09-08 00:53:01.315166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-08 00:53:01.315178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-08 00:53:01.315196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-08 00:53:01.315215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:53:01.315227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:53:01.315238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:53:01.315254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-08 00:53:01.315266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-08 00:53:01.315277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-08 00:53:01.315288 | orchestrator | 2025-09-08 00:53:01.315299 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-08 00:53:01.315379 | orchestrator | Monday 08 September 2025 00:47:29 +0000 (0:00:04.489) 0:01:00.465 ****** 2025-09-08 00:53:01.315399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.315411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.315422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.315434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.315450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.315462 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.315474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.315484 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.315496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.315539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.315552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.315564 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.315575 | orchestrator | 2025-09-08 00:53:01.315586 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-09-08 00:53:01.315596 | orchestrator | Monday 08 September 2025 00:47:30 +0000 (0:00:00.666) 0:01:01.132 ****** 2025-09-08 00:53:01.315608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.315624 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.315636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.315653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.315664 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.315681 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.315693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.315704 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.315715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.315727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.315742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.315754 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.315765 | orchestrator | 2025-09-08 00:53:01.315775 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-08 00:53:01.315787 | orchestrator | Monday 08 September 2025 00:47:31 +0000 (0:00:00.918) 0:01:02.050 ****** 2025-09-08 00:53:01.315873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.315894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.315906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.315917 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.315928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.315939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.315950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.315961 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.315978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.316000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.316017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.316075 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.316088 | orchestrator | 2025-09-08 00:53:01.316099 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-08 00:53:01.316110 | orchestrator | Monday 08 September 2025 00:47:32 +0000 (0:00:01.134) 0:01:03.184 ****** 2025-09-08 00:53:01.316121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.316133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.316144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.316155 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.316171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.316190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.316201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.316212 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.316230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.316241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.316253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.316264 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.316274 | orchestrator | 2025-09-08 00:53:01.316285 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-08 00:53:01.316296 | orchestrator | Monday 08 September 2025 00:47:33 +0000 (0:00:01.562) 0:01:04.746 ****** 2025-09-08 00:53:01.316308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.316330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.316341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.316360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.316372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.316383 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.316394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.316405 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.316416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.316438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.316449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.316461 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.316471 | orchestrator | 2025-09-08 00:53:01.316482 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-09-08 00:53:01.316493 | orchestrator | Monday 08 September 2025 00:47:35 +0000 (0:00:01.714) 0:01:06.460 ****** 2025-09-08 00:53:01.316579 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.316599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.316611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.316622 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.316633 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.316687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.316705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.316716 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.316728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.316745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.316757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.316768 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.316779 | orchestrator | 2025-09-08 00:53:01.316789 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal 
TLS certificate] *** 2025-09-08 00:53:01.316861 | orchestrator | Monday 08 September 2025 00:47:36 +0000 (0:00:00.755) 0:01:07.215 ****** 2025-09-08 00:53:01.316874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.316892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.316908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-09-08 00:53:01.316920 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.316931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.316942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.316962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.316974 | 
orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.316985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.317003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.317014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.317025 | orchestrator | skipping: [testbed-node-2] 
2025-09-08 00:53:01.317035 | orchestrator | 2025-09-08 00:53:01.317046 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-09-08 00:53:01.317057 | orchestrator | Monday 08 September 2025 00:47:36 +0000 (0:00:00.550) 0:01:07.765 ****** 2025-09-08 00:53:01.317073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.317085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.317096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.317108 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.317126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.317145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.317157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.317168 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.317179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-08 00:53:01.317194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:53:01.317206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:53:01.317217 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.317227 | orchestrator | 2025-09-08 00:53:01.317238 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-08 00:53:01.317249 | orchestrator | Monday 08 September 2025 00:47:37 +0000 (0:00:00.733) 0:01:08.499 ****** 2025-09-08 00:53:01.317260 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-08 00:53:01.317271 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-08 00:53:01.317324 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-08 00:53:01.317337 | orchestrator | 2025-09-08 00:53:01.317348 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-08 00:53:01.317371 | orchestrator | Monday 08 September 2025 00:47:39 +0000 (0:00:01.692) 0:01:10.192 ****** 2025-09-08 00:53:01.317381 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-08 00:53:01.317392 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-08 00:53:01.317403 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-08 00:53:01.317414 | orchestrator | 2025-09-08 00:53:01.317424 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-08 00:53:01.317435 | orchestrator | Monday 08 September 2025 00:47:40 +0000 (0:00:01.426) 0:01:11.619 ****** 2025-09-08 00:53:01.317446 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-08 00:53:01.317498 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-08 00:53:01.317564 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-08 00:53:01.317575 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-08 00:53:01.317586 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.317597 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-08 00:53:01.317607 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.317618 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-08 00:53:01.317629 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.317639 | orchestrator | 2025-09-08 00:53:01.317650 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-08 00:53:01.317661 | orchestrator | Monday 08 September 2025 00:47:41 +0000 (0:00:00.901) 0:01:12.520 ****** 2025-09-08 00:53:01.317672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-08 00:53:01.317684 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-08 00:53:01.317695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-08 00:53:01.317722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:53:01.317770 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:53:01.317801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:53:01.317813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-08 00:53:01.317870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-08 00:53:01.317887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-08 00:53:01.317898 | orchestrator | 2025-09-08 00:53:01.317909 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-08 00:53:01.317920 | orchestrator | Monday 08 September 2025 00:47:44 +0000 (0:00:02.645) 0:01:15.165 ****** 2025-09-08 00:53:01.317932 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:53:01.317943 | orchestrator | 2025-09-08 00:53:01.317953 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-08 00:53:01.317972 | orchestrator | Monday 08 September 2025 00:47:45 +0000 (0:00:00.798) 0:01:15.964 ****** 2025-09-08 00:53:01.317985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-08 00:53:01.318006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-08 00:53:01.318167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-08 00:53:01.318182 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.318197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-08 00:53:01.318208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.318226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.318244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.318255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-08 00:53:01.318265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 
'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-08 00:53:01.318275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.318289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.318299 | orchestrator | 2025-09-08 00:53:01.318314 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-08 00:53:01.318324 | orchestrator | Monday 08 September 2025 00:47:49 +0000 (0:00:04.464) 
0:01:20.428 ****** 2025-09-08 00:53:01.318335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-08 00:53:01.318352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-08 00:53:01.318362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.318372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.318382 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.318392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-08 00:53:01.318406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-08 00:53:01.318423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.318433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.318443 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.318460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-08 00:53:01.318470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-08 00:53:01.318480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.318495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.318562 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.318573 | orchestrator | 2025-09-08 00:53:01.318583 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-08 00:53:01.318593 | orchestrator | Monday 08 September 2025 00:47:50 +0000 (0:00:01.146) 0:01:21.575 ****** 2025-09-08 00:53:01.318603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-08 00:53:01.318614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-08 00:53:01.318624 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.318634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-08 00:53:01.318644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-08 00:53:01.318654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-08 00:53:01.318669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-08 00:53:01.318678 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.318688 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.318697 | orchestrator | 2025-09-08 00:53:01.318713 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-08 00:53:01.318723 | orchestrator | Monday 08 September 2025 00:47:52 +0000 (0:00:01.432) 0:01:23.008 ****** 2025-09-08 00:53:01.318733 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.318742 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.318752 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.318762 | orchestrator | 2025-09-08 00:53:01.318771 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-08 00:53:01.318781 | orchestrator | Monday 08 September 2025 00:47:53 +0000 (0:00:01.442) 0:01:24.451 ****** 2025-09-08 00:53:01.318791 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.318801 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.318810 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.318820 | orchestrator | 2025-09-08 00:53:01.318829 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-08 00:53:01.318839 | orchestrator | Monday 08 September 2025 00:47:55 +0000 (0:00:02.136) 0:01:26.587 ****** 2025-09-08 00:53:01.318849 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:53:01.318858 | orchestrator | 2025-09-08 00:53:01.318866 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-08 00:53:01.318874 | orchestrator | Monday 08 September 2025 00:47:56 +0000 (0:00:00.979) 0:01:27.567 ****** 2025-09-08 
00:53:01.318883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 00:53:01.318901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.318910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.318919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 00:53:01.318992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 00:53:01.319003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.319020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.319033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.319041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.319049 | orchestrator | 2025-09-08 00:53:01.319057 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-08 00:53:01.319065 | orchestrator | Monday 08 September 2025 00:48:00 +0000 (0:00:03.875) 0:01:31.443 ****** 2025-09-08 00:53:01.319079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-08 00:53:01.319087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.319100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.319108 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.319120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-08 00:53:01.319128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.319136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.319144 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.319157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-08 00:53:01.319165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.319182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.319190 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.319198 | orchestrator |
2025-09-08 00:53:01.319206 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2025-09-08 00:53:01.319213 | orchestrator | Monday 08 September 2025 00:48:02 +0000 (0:00:01.873) 0:01:33.316 ******
2025-09-08 00:53:01.319222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-08 00:53:01.319234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-08 00:53:01.319243 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.319251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-08 00:53:01.319259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-08 00:53:01.319267 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.319275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-08 00:53:01.319283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-08 00:53:01.319290 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.319298 | orchestrator |
2025-09-08 00:53:01.319306 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-09-08 00:53:01.319314 | orchestrator | Monday 08 September 2025 00:48:03 +0000 (0:00:01.178) 0:01:34.494 ******
2025-09-08 00:53:01.319321 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:53:01.319329 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:53:01.319337 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:53:01.319344 | orchestrator |
2025-09-08 00:53:01.319352 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-09-08 00:53:01.319360 | orchestrator | Monday 08 September 2025 00:48:04 +0000 (0:00:01.291) 0:01:35.786 ******
2025-09-08 00:53:01.319368 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:53:01.319375 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:53:01.319383 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:53:01.319396 | orchestrator |
2025-09-08 00:53:01.319408 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-09-08 00:53:01.319416 | orchestrator | Monday 08 September 2025 00:48:07 +0000 (0:00:02.039) 0:01:37.826 ******
2025-09-08 00:53:01.319424 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.319431 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.319439 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.319447 | orchestrator |
2025-09-08 00:53:01.319454 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2025-09-08 00:53:01.319462 | orchestrator | Monday 08 September 2025 00:48:07 +0000 (0:00:00.291) 0:01:38.117 ****** 2025-09-08
00:53:01.319470 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:53:01.319477 | orchestrator | 2025-09-08 00:53:01.319485 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-08 00:53:01.319493 | orchestrator | Monday 08 September 2025 00:48:08 +0000 (0:00:00.864) 0:01:38.982 ****** 2025-09-08 00:53:01.319501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-08 00:53:01.319523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-08 00:53:01.319536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-08 00:53:01.319544 | orchestrator | 2025-09-08 00:53:01.319552 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-08 00:53:01.319560 | orchestrator | Monday 08 September 2025 00:48:10 +0000 (0:00:02.562) 0:01:41.544 ****** 2025-09-08 00:53:01.319572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 
rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-08 00:53:01.319586 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.319594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-08 00:53:01.319602 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.319610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 
192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-08 00:53:01.319619 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.319703 | orchestrator | 2025-09-08 00:53:01.319712 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-08 00:53:01.319720 | orchestrator | Monday 08 September 2025 00:48:12 +0000 (0:00:01.551) 0:01:43.096 ****** 2025-09-08 00:53:01.319729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-08 00:53:01.319742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-08 00:53:01.319752 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.319760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-08 00:53:01.319776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-08 00:53:01.319784 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.319796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-08 00:53:01.319805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-08 00:53:01.319813 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.319821 | orchestrator |
2025-09-08 00:53:01.319829 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2025-09-08 00:53:01.319837 | orchestrator | Monday 08 September 2025 00:48:13 +0000 (0:00:01.703) 0:01:44.799 ******
2025-09-08 00:53:01.319844 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.319852 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.319860 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.319868 | orchestrator |
2025-09-08 00:53:01.319876 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-09-08 00:53:01.319884 | orchestrator | Monday 08 September 2025 00:48:14 +0000 (0:00:00.694) 0:01:45.494 ******
2025-09-08 00:53:01.319891 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.319899 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.319907 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.319915 | orchestrator |
2025-09-08 00:53:01.319923 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-09-08 00:53:01.319930 | orchestrator | Monday 08 September 2025 00:48:15 +0000 (0:00:01.313) 0:01:46.807 ******
2025-09-08 00:53:01.319938 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:53:01.319946 | orchestrator |
2025-09-08 00:53:01.319954 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-09-08 00:53:01.319962 | orchestrator | Monday 08 September 2025 00:48:16 +0000 (0:00:00.745) 0:01:47.553 ******
2025-09-08 00:53:01.319970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-08 00:53:01.319987 |
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.319996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.320010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.320019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 00:53:01.320027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.320039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.320052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.320064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 00:53:01.320073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.320081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.320089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.320105 | orchestrator | 2025-09-08 00:53:01.320116 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-09-08 00:53:01.320125 | orchestrator | Monday 08 September 2025 00:48:20 +0000 (0:00:03.749) 0:01:51.302 ****** 2025-09-08 00:53:01.320133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-08 00:53:01.320142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.320164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.320173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-08 00:53:01.320181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.320194 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.320206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.320215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.320228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.320236 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.320244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-08 00:53:01.320252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.320269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.320277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.320285 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.320293 | orchestrator |
2025-09-08 00:53:01.320301 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-09-08 00:53:01.320309 | orchestrator | Monday 08 September 2025 00:48:21 +0000 (0:00:00.990) 0:01:52.293 ******
2025-09-08 00:53:01.320317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-08 00:53:01.320330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-08 00:53:01.320338 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.320346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-08 00:53:01.320354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-08 00:53:01.320362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-08 00:53:01.320370 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.320378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-08 00:53:01.320386 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.320394 | orchestrator |
2025-09-08 00:53:01.320402 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-09-08 00:53:01.320415 | orchestrator | Monday 08 September 2025 00:48:22 +0000 (0:00:01.215) 0:01:53.508 ******
2025-09-08 00:53:01.320423 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:53:01.320431 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:53:01.320439 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:53:01.320446 | orchestrator |
2025-09-08 00:53:01.320454 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-09-08 00:53:01.320462 | orchestrator | Monday 08 September 2025 00:48:24 +0000 (0:00:01.510) 0:01:55.019 ******
2025-09-08 00:53:01.320564 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:53:01.320574 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:53:01.320582 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:53:01.320590 | orchestrator |
2025-09-08 00:53:01.320598 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-09-08 00:53:01.320606 | orchestrator | Monday 08 September 2025 00:48:26 +0000 (0:00:02.614) 0:01:57.633 ******
2025-09-08 00:53:01.320613 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.320621 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.320629 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.320637 | orchestrator |
2025-09-08 00:53:01.320645 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-09-08 00:53:01.320653 | orchestrator | Monday 08 September 2025 00:48:27 +0000 (0:00:00.813) 0:01:58.447 ******
2025-09-08 00:53:01.320661 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.320668 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.320676 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.320684 | orchestrator |
2025-09-08 00:53:01.320692 | orchestrator | TASK [include_role : designate] ************************************************
2025-09-08 00:53:01.320700 | orchestrator | Monday 08 September 2025 00:48:28 +0000 (0:00:00.500) 0:01:58.948 ******
2025-09-08 00:53:01.320708 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:53:01.320715 | orchestrator |
2025-09-08 00:53:01.320736 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-09-08 00:53:01.320745 | orchestrator | Monday 08 September 2025 00:48:29 +0000 (0:00:00.896) 0:01:59.844 ******
2025-09-08 00:53:01.320753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 00:53:01.320767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 00:53:01.320776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 00:53:01.320791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.320800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 00:53:01.320812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.320820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.320828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.320841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.320854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.320863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.320871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.320883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.320892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.320904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 00:53:01.320917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 00:53:01.320926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.320934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.320946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.320954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.320962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.320970 | orchestrator |
2025-09-08 00:53:01.320978 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-09-08 00:53:01.320991 | orchestrator | Monday 08 September 2025 00:48:33 +0000 (0:00:04.409) 0:02:04.253 ******
2025-09-08 00:53:01.321005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 00:53:01.321013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 00:53:01.321022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.321030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.321057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.321066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.321104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.321124 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.321132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 00:53:01.321141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 00:53:01.321149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.321187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.321234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.321253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.321261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.321269 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.321278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 00:53:01.321286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 00:53:01.321298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.321307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.321320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.321333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.321341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.321349 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.321357 | orchestrator |
2025-09-08 00:53:01.321365 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-09-08 00:53:01.321373 | orchestrator | Monday 08 September 2025 00:48:34 +0000 (0:00:00.852) 0:02:05.106 ******
2025-09-08 00:53:01.321382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-08 00:53:01.321390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-08 00:53:01.321399 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.321407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-08 00:53:01.321415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-08 00:53:01.321423 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.321431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-08 00:53:01.321443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-08 00:53:01.321451 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.321459 | orchestrator |
2025-09-08 00:53:01.321467 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-09-08 00:53:01.321480 | orchestrator | Monday 08 September 2025 00:48:35 +0000 (0:00:01.042) 0:02:06.148 ******
2025-09-08 00:53:01.321488 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:53:01.321496 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:53:01.321549 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:53:01.321558 | orchestrator |
2025-09-08 00:53:01.321566 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-09-08 00:53:01.321574 | orchestrator | Monday 08 September 2025 00:48:37 +0000 (0:00:01.760) 0:02:07.909 ******
2025-09-08 00:53:01.321582 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:53:01.321590 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:53:01.321598 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:53:01.321606 | orchestrator |
2025-09-08 00:53:01.321614 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-09-08 00:53:01.321621 | orchestrator | Monday 08 September 2025 00:48:38 +0000 (0:00:01.824) 0:02:09.734 ******
2025-09-08 00:53:01.321629 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.321637 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.321645 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.321653 | orchestrator |
2025-09-08 00:53:01.321661 | orchestrator | TASK [include_role : glance] ***************************************************
2025-09-08 00:53:01.321667 | orchestrator | Monday 08 September 2025 00:48:39 +0000
(0:00:00.529) 0:02:10.263 ****** 2025-09-08 00:53:01.321674 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:53:01.321681 | orchestrator | 2025-09-08 00:53:01.321687 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-09-08 00:53:01.321694 | orchestrator | Monday 08 September 2025 00:48:40 +0000 (0:00:00.817) 0:02:11.080 ****** 2025-09-08 00:53:01.321709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 00:53:01.321722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-08 00:53:01.321750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 00:53:01.321763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 
'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-08 00:53:01.321780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 00:53:01.321789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-08 00:53:01.321801 | orchestrator | 2025-09-08 00:53:01.321808 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-08 00:53:01.321815 | orchestrator | Monday 08 September 2025 00:48:44 +0000 (0:00:04.190) 0:02:15.271 ****** 2025-09-08 00:53:01.321830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-08 00:53:01.321839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-08 00:53:01.321851 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.321864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-08 00:53:01.321895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-08 00:53:01.321902 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.321917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-08 00:53:01.321931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-08 00:53:01.321938 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.321945 | orchestrator | 2025-09-08 00:53:01.321952 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-08 00:53:01.321958 | orchestrator | Monday 08 September 2025 00:48:47 +0000 (0:00:03.148) 0:02:18.419 ****** 2025-09-08 00:53:01.321965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-08 00:53:01.321977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-08 00:53:01.321984 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.321994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-08 00:53:01.322001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-08 00:53:01.322008 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.322487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-08 00:53:01.322533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-08 00:53:01.322542 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.322549 | orchestrator | 2025-09-08 00:53:01.322556 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-09-08 00:53:01.322563 | orchestrator | Monday 08 September 2025 00:48:50 +0000 (0:00:03.150) 0:02:21.570 ****** 2025-09-08 00:53:01.322570 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.322576 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.322583 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.322589 | orchestrator | 2025-09-08 00:53:01.322596 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-09-08 00:53:01.322603 | orchestrator | Monday 08 September 2025 00:48:52 +0000 (0:00:01.353) 0:02:22.923 ****** 2025-09-08 00:53:01.322609 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.322625 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.322631 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.322638 | orchestrator | 2025-09-08 00:53:01.322644 | orchestrator | TASK [include_role : gnocchi] 
************************************************** 2025-09-08 00:53:01.322651 | orchestrator | Monday 08 September 2025 00:48:54 +0000 (0:00:02.083) 0:02:25.007 ****** 2025-09-08 00:53:01.322658 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.322664 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.322671 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.322677 | orchestrator | 2025-09-08 00:53:01.322684 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-09-08 00:53:01.322690 | orchestrator | Monday 08 September 2025 00:48:54 +0000 (0:00:00.578) 0:02:25.585 ****** 2025-09-08 00:53:01.322697 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:53:01.322703 | orchestrator | 2025-09-08 00:53:01.322710 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-08 00:53:01.322732 | orchestrator | Monday 08 September 2025 00:48:55 +0000 (0:00:00.897) 0:02:26.483 ****** 2025-09-08 00:53:01.322740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 00:53:01.322753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 00:53:01.322761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 00:53:01.322768 | orchestrator | 2025-09-08 00:53:01.322775 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-08 00:53:01.322781 | orchestrator | Monday 08 September 2025 00:48:59 +0000 (0:00:03.532) 0:02:30.015 ****** 2025-09-08 00:53:01.322794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-08 00:53:01.322806 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.322813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-08 00:53:01.322840 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.322847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-08 00:53:01.322854 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.322861 | orchestrator | 2025-09-08 00:53:01.322897 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-08 
00:53:01.322904 | orchestrator | Monday 08 September 2025 00:48:59 +0000 (0:00:00.629) 0:02:30.645 ****** 2025-09-08 00:53:01.322932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-08 00:53:01.322958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-08 00:53:01.322966 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.322977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-08 00:53:01.322984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-08 00:53:01.322991 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.322997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-08 00:53:01.323004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-08 00:53:01.323011 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.323017 | orchestrator | 2025-09-08 00:53:01.323024 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-09-08 00:53:01.323031 | orchestrator | Monday 08 September 2025 00:49:00 +0000 (0:00:00.664) 
0:02:31.309 ****** 2025-09-08 00:53:01.323038 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.323044 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.323051 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.323057 | orchestrator | 2025-09-08 00:53:01.323069 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-09-08 00:53:01.323106 | orchestrator | Monday 08 September 2025 00:49:01 +0000 (0:00:01.273) 0:02:32.583 ****** 2025-09-08 00:53:01.323114 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.323122 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.323130 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.323138 | orchestrator | 2025-09-08 00:53:01.323146 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-09-08 00:53:01.323154 | orchestrator | Monday 08 September 2025 00:49:03 +0000 (0:00:02.129) 0:02:34.713 ****** 2025-09-08 00:53:01.323162 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.323170 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.323182 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.323190 | orchestrator | 2025-09-08 00:53:01.323198 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-09-08 00:53:01.323207 | orchestrator | Monday 08 September 2025 00:49:04 +0000 (0:00:00.582) 0:02:35.296 ****** 2025-09-08 00:53:01.323215 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:53:01.323223 | orchestrator | 2025-09-08 00:53:01.323230 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-09-08 00:53:01.323238 | orchestrator | Monday 08 September 2025 00:49:05 +0000 (0:00:00.910) 0:02:36.207 ****** 2025-09-08 00:53:01.323253 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-08 00:53:01.323267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-08 00:53:01.323286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-08 00:53:01.323295 | orchestrator | 2025-09-08 00:53:01.323303 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-08 00:53:01.323315 | orchestrator | Monday 08 September 2025 00:49:08 +0000 (0:00:03.563) 0:02:39.771 ****** 2025-09-08 00:53:01.323328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-08 00:53:01.323338 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.323351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-08 00:53:01.323369 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.323383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-08 00:53:01.323393 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.323400 | orchestrator | 2025-09-08 00:53:01.323408 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-09-08 00:53:01.323446 | orchestrator | Monday 08 September 2025 00:49:10 +0000 (0:00:01.329) 0:02:41.101 ****** 2025-09-08 00:53:01.323455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-08 00:53:01.323464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-08 00:53:01.323475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-08 00:53:01.323488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-08 00:53:01.323496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-08 00:53:01.323516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-08 00:53:01.323523 | orchestrator | skipping: 
[testbed-node-0] 2025-09-08 00:53:01.323530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-08 00:53:01.323567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-08 00:53:01.323578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-08 00:53:01.323585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-08 00:53:01.323592 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.323598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-08 00:53:01.323605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-08 00:53:01.323612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-08 00:53:01.323619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-08 00:53:01.323626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-08 00:53:01.323650 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.323657 | orchestrator | 2025-09-08 00:53:01.323668 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-09-08 00:53:01.323675 | orchestrator | Monday 08 September 2025 00:49:11 +0000 (0:00:01.024) 0:02:42.126 ****** 2025-09-08 00:53:01.323682 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.323688 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.323695 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.323702 | orchestrator | 2025-09-08 00:53:01.323708 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-09-08 00:53:01.323719 | orchestrator | Monday 08 September 2025 00:49:12 +0000 (0:00:01.357) 0:02:43.484 ****** 2025-09-08 00:53:01.323726 | orchestrator | changed: [testbed-node-0] 2025-09-08 
00:53:01.323732 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.323739 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.323745 | orchestrator | 2025-09-08 00:53:01.323752 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-09-08 00:53:01.323759 | orchestrator | Monday 08 September 2025 00:49:14 +0000 (0:00:02.034) 0:02:45.518 ****** 2025-09-08 00:53:01.323765 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.323772 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.323778 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.323785 | orchestrator | 2025-09-08 00:53:01.323791 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-09-08 00:53:01.323798 | orchestrator | Monday 08 September 2025 00:49:15 +0000 (0:00:00.324) 0:02:45.843 ****** 2025-09-08 00:53:01.323805 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.323811 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.323818 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.323824 | orchestrator | 2025-09-08 00:53:01.323831 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-09-08 00:53:01.323838 | orchestrator | Monday 08 September 2025 00:49:15 +0000 (0:00:00.593) 0:02:46.437 ****** 2025-09-08 00:53:01.323844 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:53:01.323851 | orchestrator | 2025-09-08 00:53:01.323858 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-09-08 00:53:01.323864 | orchestrator | Monday 08 September 2025 00:49:16 +0000 (0:00:00.995) 0:02:47.432 ****** 2025-09-08 00:53:01.323876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:53:01.323884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:53:01.323892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-08 00:53:01.323907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:53:01.323915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:53:01.323922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-08 00:53:01.323933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:53:01.323941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:53:01.323952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-08 00:53:01.323960 | orchestrator | 2025-09-08 00:53:01.323967 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-09-08 00:53:01.323973 | orchestrator | Monday 08 September 2025 00:49:19 +0000 (0:00:03.321) 0:02:50.754 ****** 2025-09-08 00:53:01.323984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-08 00:53:01.323991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:53:01.324003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-08 00:53:01.324010 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.324017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-08 00:53:01.324028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:53:01.324039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-08 00:53:01.324046 | orchestrator | skipping: 
[testbed-node-1] 2025-09-08 00:53:01.324053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-08 00:53:01.324064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:53:01.324071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-08 00:53:01.324082 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.324089 | orchestrator |
2025-09-08 00:53:01.324096 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-09-08 00:53:01.324102 | orchestrator | Monday 08 September 2025 00:49:20 +0000 (0:00:00.910) 0:02:51.665 ******
2025-09-08 00:53:01.324109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-08 00:53:01.324117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-08 00:53:01.324124 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.324131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-08 00:53:01.324138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-08 00:53:01.324145 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.324152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-08 00:53:01.324162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-08 00:53:01.324169 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.324176 | orchestrator |
2025-09-08 00:53:01.324183 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-09-08 00:53:01.324189 | orchestrator | Monday 08 September 2025 00:49:21 +0000 (0:00:00.815) 0:02:52.480 ******
2025-09-08 00:53:01.324196 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:53:01.324202 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:53:01.324209 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:53:01.324216 | orchestrator |
2025-09-08 00:53:01.324222 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-09-08 00:53:01.324229 | orchestrator | Monday 08 September 2025 00:49:23 +0000 (0:00:01.400) 0:02:53.881 ******
2025-09-08 00:53:01.324236 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:53:01.324242 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:53:01.324249 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:53:01.324255 | orchestrator |
2025-09-08 00:53:01.324262 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-09-08 00:53:01.324269 | orchestrator | Monday 08 September 2025 00:49:25 +0000 (0:00:02.310) 0:02:56.192 ******
2025-09-08 00:53:01.324275 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.324282 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.324288 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.324295 | orchestrator |
2025-09-08 00:53:01.324301 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-09-08 00:53:01.324315 | orchestrator | Monday 08 September 2025 00:49:25 +0000 (0:00:00.606) 0:02:56.799 ******
2025-09-08 00:53:01.324322 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:53:01.324329 | orchestrator |
2025-09-08 00:53:01.324335 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-09-08 00:53:01.324342 | orchestrator | Monday 08 September 2025 00:49:27 +0000 (0:00:01.018) 0:02:57.817 ******
2025-09-08 00:53:01.324353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-08 00:53:01.324362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value':
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.324369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 00:53:01.324380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.324387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 00:53:01.324405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.324412 | orchestrator | 2025-09-08 00:53:01.324418 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-09-08 00:53:01.324425 | orchestrator | Monday 08 September 2025 00:49:31 +0000 (0:00:04.312) 0:03:02.130 ****** 2025-09-08 00:53:01.324432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-08 00:53:01.324443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.324450 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.324457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-08 00:53:01.324471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.324478 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.324486 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-08 00:53:01.324493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.324500 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.324549 | orchestrator | 2025-09-08 00:53:01.324556 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-09-08 00:53:01.324563 | orchestrator | Monday 08 September 2025 00:49:32 +0000 (0:00:01.628) 0:03:03.759 
******
2025-09-08 00:53:01.324571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-09-08 00:53:01.324578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-09-08 00:53:01.324585 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.324592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-09-08 00:53:01.324612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-09-08 00:53:01.324625 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.324632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-09-08 00:53:01.324639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-09-08 00:53:01.324645 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.324652 | orchestrator |
2025-09-08 00:53:01.324659 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-09-08 00:53:01.324666 | orchestrator | Monday 08 September 2025 00:49:33 +0000 (0:00:00.987) 0:03:04.746 ******
2025-09-08 00:53:01.324672 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:53:01.324679 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:53:01.324685 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:53:01.324692 | orchestrator |
2025-09-08 00:53:01.324699 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2025-09-08 00:53:01.324705 | orchestrator | Monday 08 September 2025 00:49:35 +0000 (0:00:01.378) 0:03:06.124 ******
2025-09-08 00:53:01.324712 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:53:01.324719 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:53:01.324725 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:53:01.324732 | orchestrator |
2025-09-08 00:53:01.324738 | orchestrator | TASK [include_role : manila] ***************************************************
2025-09-08 00:53:01.324745 | orchestrator | Monday 08 September 2025 00:49:37 +0000 (0:00:02.130) 0:03:08.255 ******
2025-09-08 00:53:01.324756 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:53:01.324763 | orchestrator |
2025-09-08 00:53:01.324770 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2025-09-08 00:53:01.324777 | orchestrator | Monday 08 September 2025 00:49:38 +0000 (0:00:01.328) 0:03:09.583 ******
2025-09-08 00:53:01.324784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port':
'8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-08 00:53:01.324792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.324799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.324813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.324821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-08 00:53:01.324832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-08 00:53:01.324839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': 
{'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.324846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.324853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.324867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.324875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.325461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.325483 | orchestrator | 2025-09-08 00:53:01.325490 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-08 00:53:01.325496 | 
orchestrator | Monday 08 September 2025 00:49:42 +0000 (0:00:03.468) 0:03:13.051 ****** 2025-09-08 00:53:01.325554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-08 00:53:01.325561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.325576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.325587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.325594 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.325600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-08 00:53:01.325658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 
'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.325668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.325674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.325685 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.325692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': 
{'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-08 00:53:01.325702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.325708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.325756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.325766 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.325772 | orchestrator | 2025-09-08 00:53:01.325778 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-08 00:53:01.325784 | orchestrator | Monday 08 September 2025 00:49:42 +0000 (0:00:00.666) 0:03:13.718 ****** 2025-09-08 00:53:01.325791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-08 00:53:01.325798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-08 00:53:01.325804 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.325811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-08 00:53:01.325817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-08 00:53:01.325828 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.325834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-08 00:53:01.325840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-08 00:53:01.325846 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.325852 | orchestrator | 2025-09-08 00:53:01.325859 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-09-08 00:53:01.325865 | orchestrator | Monday 08 September 2025 00:49:44 +0000 (0:00:01.432) 0:03:15.151 ****** 2025-09-08 00:53:01.325871 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.325877 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.325883 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.325889 | orchestrator | 2025-09-08 00:53:01.325895 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-08 00:53:01.325901 | orchestrator | Monday 08 September 2025 00:49:45 +0000 (0:00:01.421) 0:03:16.573 ****** 2025-09-08 00:53:01.325907 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.325913 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.325920 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.325926 | orchestrator | 2025-09-08 00:53:01.325949 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-08 00:53:01.325956 | orchestrator | Monday 08 September 2025 00:49:47 +0000 (0:00:02.092) 0:03:18.665 ****** 2025-09-08 00:53:01.325962 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:53:01.325968 | orchestrator | 2025-09-08 00:53:01.325978 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-08 00:53:01.325985 | orchestrator | Monday 08 September 2025 00:49:49 +0000 (0:00:01.333) 0:03:19.998 ****** 2025-09-08 00:53:01.325991 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-08 00:53:01.325997 | orchestrator | 2025-09-08 00:53:01.326004 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-08 00:53:01.326010 | orchestrator | Monday 08 September 2025 00:49:51 +0000 (0:00:02.484) 0:03:22.482 ****** 2025-09-08 00:53:01.326099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 00:53:01.326117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-08 00:53:01.326124 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.326137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 00:53:01.326144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-08 00:53:01.326151 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.326200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 00:53:01.326214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-08 00:53:01.326221 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.326227 | orchestrator | 2025-09-08 00:53:01.326233 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-08 00:53:01.326240 | orchestrator | Monday 08 September 2025 00:49:54 +0000 (0:00:02.379) 0:03:24.862 ****** 2025-09-08 00:53:01.326250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 00:53:01.326301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-08 00:53:01.326310 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.326317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 00:53:01.326329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-08 00:53:01.326335 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.326382 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 00:53:01.326408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-08 00:53:01.326414 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.326421 | orchestrator | 2025-09-08 00:53:01.326427 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-08 00:53:01.326433 | orchestrator | Monday 08 September 2025 00:49:56 +0000 (0:00:02.512) 0:03:27.375 ****** 2025-09-08 00:53:01.326440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-08 00:53:01.326450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}})
2025-09-08 00:53:01.326457 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.326463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-08 00:53:01.326470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-08 00:53:01.326480 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.326585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port
3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-08 00:53:01.326598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-08 00:53:01.326605 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.326611 | orchestrator |
2025-09-08 00:53:01.326617 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-09-08 00:53:01.326623 | orchestrator | Monday 08 September 2025 00:49:59 +0000 (0:00:02.972) 0:03:30.347 ******
2025-09-08 00:53:01.326629 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:53:01.326636 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:53:01.326642 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:53:01.326648 | orchestrator |
2025-09-08 00:53:01.326654 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-09-08 00:53:01.326660 | orchestrator | Monday 08 September 2025 00:50:01 +0000 (0:00:01.727) 0:03:32.075 ******
2025-09-08 00:53:01.326673 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.326679 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.326685 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.326691 | orchestrator |
2025-09-08 00:53:01.326696 | orchestrator | TASK [include_role : masakari] *************************************************
2025-09-08 00:53:01.326702 | orchestrator | Monday 08 September 2025 00:50:02 +0000
(0:00:01.504) 0:03:33.580 ******
2025-09-08 00:53:01.326708 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.326714 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.326719 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.326725 | orchestrator |
2025-09-08 00:53:01.326731 | orchestrator | TASK [include_role : memcached] ************************************************
2025-09-08 00:53:01.326737 | orchestrator | Monday 08 September 2025 00:50:03 +0000 (0:00:00.308) 0:03:33.888 ******
2025-09-08 00:53:01.326743 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:53:01.326748 | orchestrator |
2025-09-08 00:53:01.326754 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-09-08 00:53:01.326760 | orchestrator | Monday 08 September 2025 00:50:04 +0000 (0:00:01.333) 0:03:35.222 ******
2025-09-08 00:53:01.326770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-08 00:53:01.326781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes':
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-08 00:53:01.326824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-08 00:53:01.326832 | orchestrator |
2025-09-08 00:53:01.326838 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-09-08 00:53:01.326843 | orchestrator | Monday 08 September 2025 00:50:05 +0000 (0:00:01.426) 0:03:36.649 ******
2025-09-08 00:53:01.326849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-08 00:53:01.326855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-08 00:53:01.326861 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.326867 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.326876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka',
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-08 00:53:01.326886 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.326892 | orchestrator |
2025-09-08 00:53:01.326897 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-09-08 00:53:01.326903 | orchestrator | Monday 08 September 2025 00:50:06 +0000 (0:00:00.459) 0:03:37.108 ******
2025-09-08 00:53:01.326909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-09-08 00:53:01.326916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-09-08 00:53:01.326922 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.326927 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.326967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-09-08 00:53:01.326975 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.326980 | orchestrator |
2025-09-08 00:53:01.326986 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-09-08 00:53:01.326992 | orchestrator | Monday 08 September 2025 00:50:07 +0000 (0:00:00.922) 0:03:38.031 ******
2025-09-08 00:53:01.326997 | orchestrator | skipping:
[testbed-node-0]
2025-09-08 00:53:01.327003 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.327009 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.327014 | orchestrator |
2025-09-08 00:53:01.327020 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-09-08 00:53:01.327026 | orchestrator | Monday 08 September 2025 00:50:07 +0000 (0:00:00.477) 0:03:38.509 ******
2025-09-08 00:53:01.327032 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.327037 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.327043 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.327048 | orchestrator |
2025-09-08 00:53:01.327054 | orchestrator | TASK [include_role : mistral] **************************************************
2025-09-08 00:53:01.327060 | orchestrator | Monday 08 September 2025 00:50:09 +0000 (0:00:01.364) 0:03:39.873 ******
2025-09-08 00:53:01.327065 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.327071 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.327077 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.327082 | orchestrator |
2025-09-08 00:53:01.327088 | orchestrator | TASK [include_role : neutron] **************************************************
2025-09-08 00:53:01.327094 | orchestrator | Monday 08 September 2025 00:50:09 +0000 (0:00:00.321) 0:03:40.195 ******
2025-09-08 00:53:01.327099 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:53:01.327113 | orchestrator |
2025-09-08 00:53:01.327119 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-09-08 00:53:01.327124 | orchestrator | Monday 08 September 2025 00:50:10 +0000 (0:00:01.419) 0:03:41.614 ******
2025-09-08 00:53:01.327134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server',
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 00:53:01.327144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.327150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.327192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.327200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-09-08 00:53:01.327211
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.327217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-08 00:53:01.327228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port':
'9696', 'listen_port': '9696'}}}})
2025-09-08 00:53:01.327234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-08 00:53:01.327273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.327281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.327287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 00:53:01.327297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.327306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.327312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.327351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-09-08 00:53:01.327359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'},
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-09-08 00:53:01.327369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-08 00:53:01.327375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.327384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled':
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.327390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-09-08 00:53:01.327429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-08 00:53:01.327437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value':
{'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-09-08 00:53:01.327447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-08 00:53:01.327453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08
00:53:01.327461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.327467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-09-08 00:53:01.327518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:53:01.327527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.327537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.327542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.327551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-08 00:53:01.327557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-08 00:53:01.327602 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:53:01.327610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.327620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.327626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:53:01.327635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-08 00:53:01.327641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:53:01.327647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': 
{'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:53:01.327688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.327712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:53:01.327718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.327724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-08 00:53:01.327733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:53:01.327739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.327789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-08 00:53:01.327802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:53:01.327808 | orchestrator | 2025-09-08 00:53:01.327814 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-08 00:53:01.327820 | orchestrator | Monday 08 September 2025 00:50:14 +0000 (0:00:04.082) 0:03:45.697 ****** 2025-09-08 00:53:01.327826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 00:53:01.327836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 00:53:01.327842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.327890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 
5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.327899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.327905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.327914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.327920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.327971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-08 00:53:01.327986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-08 00:53:01.327992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.327998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.328007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:53:01.328014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:53:01.328020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:53:01.328030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:53:01.328072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.328081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.328087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:53:01.328096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:53:01.328110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-09-08 00:53:01.328158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.328166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.328172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-08 00:53:01.328179 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-08 00:53:01.328188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.328194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:53:01.328206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': 
{'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:53:01.328248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.328256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.328262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 
'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.328268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.328275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': 
'30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-08 00:53:01.328332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-08 00:53:01.328341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-08 00:53:01.328346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:53:01.328396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:53:01.328408 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.328418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.328429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:53:01.328435 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.328441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:53:01.328474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2025-09-08 00:53:01.328482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:53:01.328489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.328498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-08 00:53:01.328521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:53:01.328532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.328555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-08 00:53:01.328561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:53:01.328567 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.328572 | orchestrator | 2025-09-08 00:53:01.328578 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-08 00:53:01.328584 | orchestrator | Monday 08 September 2025 00:50:16 +0000 (0:00:01.479) 0:03:47.177 ****** 2025-09-08 00:53:01.328589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-08 00:53:01.328595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-08 00:53:01.328601 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.328606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-08 00:53:01.328619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-08 00:53:01.328629 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.328634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-08 00:53:01.328643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-08 00:53:01.328649 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.328654 | orchestrator | 2025-09-08 00:53:01.328659 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-08 00:53:01.328665 | orchestrator | Monday 08 September 2025 00:50:18 +0000 (0:00:01.946) 0:03:49.124 ****** 2025-09-08 00:53:01.328670 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.328676 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.328681 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.328686 | orchestrator | 2025-09-08 00:53:01.328692 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-09-08 00:53:01.328697 | orchestrator | Monday 08 September 2025 00:50:19 +0000 (0:00:01.223) 0:03:50.347 ****** 2025-09-08 00:53:01.328703 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.328708 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.328713 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.328719 | orchestrator | 2025-09-08 
00:53:01.328724 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-08 00:53:01.328729 | orchestrator | Monday 08 September 2025 00:50:21 +0000 (0:00:02.007) 0:03:52.355 ****** 2025-09-08 00:53:01.328735 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:53:01.328740 | orchestrator | 2025-09-08 00:53:01.328745 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-08 00:53:01.328751 | orchestrator | Monday 08 September 2025 00:50:22 +0000 (0:00:01.197) 0:03:53.552 ****** 2025-09-08 00:53:01.328772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 00:53:01.328779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 00:53:01.328785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 00:53:01.328794 | orchestrator | 2025-09-08 00:53:01.328800 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-08 00:53:01.328805 | orchestrator | Monday 08 September 2025 00:50:26 +0000 (0:00:03.638) 0:03:57.191 ****** 2025-09-08 00:53:01.328813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 00:53:01.328819 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.328839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 00:53:01.328846 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.328852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 00:53:01.328857 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.328862 | orchestrator | 2025-09-08 00:53:01.328868 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-08 00:53:01.328878 | orchestrator | Monday 08 September 2025 00:50:26 +0000 (0:00:00.522) 0:03:57.713 ****** 2025-09-08 00:53:01.328884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-08 00:53:01.328890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-08 00:53:01.328896 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.328901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-08 00:53:01.328907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-08 00:53:01.328912 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.328917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-08 00:53:01.328926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-08 00:53:01.328931 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.328937 | orchestrator | 2025-09-08 00:53:01.328942 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-08 00:53:01.328948 | orchestrator | Monday 08 September 2025 00:50:27 +0000 (0:00:00.739) 0:03:58.452 ****** 2025-09-08 00:53:01.328953 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.328958 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.328964 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.328970 | orchestrator | 2025-09-08 00:53:01.328977 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-08 00:53:01.328983 | orchestrator | Monday 08 September 2025 00:50:28 +0000 (0:00:01.209) 0:03:59.662 ****** 2025-09-08 00:53:01.328990 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.328996 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.329002 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.329008 | orchestrator | 2025-09-08 00:53:01.329014 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-09-08 00:53:01.329021 | orchestrator | Monday 08 
September 2025 00:50:31 +0000 (0:00:02.203) 0:04:01.866 ****** 2025-09-08 00:53:01.329028 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:53:01.329034 | orchestrator | 2025-09-08 00:53:01.329041 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-09-08 00:53:01.329047 | orchestrator | Monday 08 September 2025 00:50:32 +0000 (0:00:01.570) 0:04:03.436 ****** 2025-09-08 00:53:01.329070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 00:53:01.329083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 00:53:01.329094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.329101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.329108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.329130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.329142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 00:53:01.329149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.329160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.329167 | orchestrator | 2025-09-08 00:53:01.329173 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-09-08 00:53:01.329179 | orchestrator | Monday 08 September 2025 00:50:36 +0000 (0:00:04.175) 0:04:07.611 ****** 2025-09-08 00:53:01.329201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-08 00:53:01.329212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.329219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.329226 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.329236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-08 00:53:01.329244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.329250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.329257 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.329279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-08 00:53:01.329291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.329297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.329304 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.329311 | orchestrator | 2025-09-08 00:53:01.329318 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-09-08 00:53:01.329324 | orchestrator | Monday 08 September 2025 00:50:37 +0000 (0:00:00.956) 0:04:08.568 ****** 2025-09-08 00:53:01.329333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-08 00:53:01.329339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-08 00:53:01.329345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-08 00:53:01.329350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-08 00:53:01.329356 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.329361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-08 00:53:01.329367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-08 00:53:01.329376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-08 00:53:01.329381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-08 00:53:01.329401 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.329408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-08 00:53:01.329413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-08 00:53:01.329419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-08 00:53:01.329424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-08 00:53:01.329430 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.329435 | orchestrator | 2025-09-08 00:53:01.329440 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-09-08 00:53:01.329446 | orchestrator | Monday 08 September 2025 00:50:38 +0000 (0:00:01.214) 0:04:09.782 ****** 
2025-09-08 00:53:01.329451 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.329457 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.329462 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.329467 | orchestrator | 2025-09-08 00:53:01.329473 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-09-08 00:53:01.329478 | orchestrator | Monday 08 September 2025 00:50:40 +0000 (0:00:01.306) 0:04:11.089 ****** 2025-09-08 00:53:01.329483 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.329489 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.329494 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.329499 | orchestrator | 2025-09-08 00:53:01.329516 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-08 00:53:01.329522 | orchestrator | Monday 08 September 2025 00:50:42 +0000 (0:00:02.049) 0:04:13.139 ****** 2025-09-08 00:53:01.329527 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:53:01.329533 | orchestrator | 2025-09-08 00:53:01.329538 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-08 00:53:01.329544 | orchestrator | Monday 08 September 2025 00:50:43 +0000 (0:00:01.552) 0:04:14.691 ****** 2025-09-08 00:53:01.329550 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-09-08 00:53:01.329555 | orchestrator | 2025-09-08 00:53:01.329561 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-08 00:53:01.329566 | orchestrator | Monday 08 September 2025 00:50:44 +0000 (0:00:00.898) 0:04:15.589 ****** 2025-09-08 00:53:01.329575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 
'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-08 00:53:01.329585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-08 00:53:01.329591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-08 00:53:01.329596 | orchestrator | 2025-09-08 00:53:01.329602 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-08 00:53:01.329607 | orchestrator | Monday 08 September 2025 00:50:48 +0000 (0:00:04.192) 0:04:19.782 ****** 2025-09-08 00:53:01.329628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': 
{'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-08 00:53:01.329635 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.329641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-08 00:53:01.329646 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.329652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-08 00:53:01.329657 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.329663 | orchestrator | 2025-09-08 00:53:01.329668 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-08 00:53:01.329673 | orchestrator | Monday 08 September 2025 00:50:50 +0000 (0:00:01.464) 0:04:21.246 ****** 2025-09-08 00:53:01.329679 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-08 00:53:01.329685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-08 00:53:01.329694 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.329699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-08 00:53:01.329708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-08 00:53:01.329714 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.329719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-08 00:53:01.329725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-08 00:53:01.329730 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.329736 | orchestrator | 2025-09-08 00:53:01.329741 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL 
users config] ********** 2025-09-08 00:53:01.329746 | orchestrator | Monday 08 September 2025 00:50:52 +0000 (0:00:01.565) 0:04:22.812 ****** 2025-09-08 00:53:01.329751 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.329757 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.329762 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.329767 | orchestrator | 2025-09-08 00:53:01.329773 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-08 00:53:01.329778 | orchestrator | Monday 08 September 2025 00:50:54 +0000 (0:00:02.564) 0:04:25.376 ****** 2025-09-08 00:53:01.329783 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.329789 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.329794 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.329799 | orchestrator | 2025-09-08 00:53:01.329804 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-08 00:53:01.329810 | orchestrator | Monday 08 September 2025 00:50:57 +0000 (0:00:03.078) 0:04:28.454 ****** 2025-09-08 00:53:01.329830 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-08 00:53:01.329836 | orchestrator | 2025-09-08 00:53:01.329842 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-08 00:53:01.329847 | orchestrator | Monday 08 September 2025 00:50:59 +0000 (0:00:01.478) 0:04:29.933 ****** 2025-09-08 00:53:01.329853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-08 00:53:01.329858 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.329864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-08 00:53:01.329874 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.329880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-08 00:53:01.329886 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.329891 | orchestrator | 2025-09-08 00:53:01.329896 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-08 00:53:01.329902 | orchestrator | Monday 08 September 2025 00:51:00 +0000 (0:00:01.291) 0:04:31.224 ****** 2025-09-08 00:53:01.329910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-08 00:53:01.329916 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.329921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-08 00:53:01.329927 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.329932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-08 00:53:01.329938 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.329943 | orchestrator | 2025-09-08 00:53:01.329948 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-08 00:53:01.329954 | 
orchestrator | Monday 08 September 2025 00:51:01 +0000 (0:00:01.433) 0:04:32.658 ****** 2025-09-08 00:53:01.329959 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.329964 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.329970 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.329975 | orchestrator | 2025-09-08 00:53:01.329994 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-08 00:53:01.330000 | orchestrator | Monday 08 September 2025 00:51:03 +0000 (0:00:01.954) 0:04:34.612 ****** 2025-09-08 00:53:01.330006 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:53:01.330011 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:53:01.330049 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:53:01.330055 | orchestrator | 2025-09-08 00:53:01.330061 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-08 00:53:01.330066 | orchestrator | Monday 08 September 2025 00:51:06 +0000 (0:00:02.371) 0:04:36.984 ****** 2025-09-08 00:53:01.330076 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:53:01.330081 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:53:01.330087 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:53:01.330092 | orchestrator | 2025-09-08 00:53:01.330098 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-08 00:53:01.330103 | orchestrator | Monday 08 September 2025 00:51:09 +0000 (0:00:02.983) 0:04:39.968 ****** 2025-09-08 00:53:01.330109 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-08 00:53:01.330114 | orchestrator | 2025-09-08 00:53:01.330120 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-08 00:53:01.330125 | orchestrator | Monday 08 September 2025 00:51:10 +0000 
(0:00:00.888) 0:04:40.856 ****** 2025-09-08 00:53:01.330131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-08 00:53:01.330137 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.330142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-08 00:53:01.330148 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.330157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-08 00:53:01.330163 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.330169 | orchestrator | 
2025-09-08 00:53:01.330174 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-08 00:53:01.330180 | orchestrator | Monday 08 September 2025 00:51:11 +0000 (0:00:01.613) 0:04:42.470 ****** 2025-09-08 00:53:01.330185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-08 00:53:01.330191 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.330196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-08 00:53:01.330217 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.330240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-08 00:53:01.330246 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.330252 | orchestrator | 2025-09-08 00:53:01.330257 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-08 00:53:01.330262 | orchestrator | Monday 08 September 2025 00:51:13 +0000 (0:00:01.352) 0:04:43.822 ****** 2025-09-08 00:53:01.330268 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.330273 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.330279 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.330284 | orchestrator | 2025-09-08 00:53:01.330289 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-08 00:53:01.330295 | orchestrator | Monday 08 September 2025 00:51:14 +0000 (0:00:01.526) 0:04:45.349 ****** 2025-09-08 00:53:01.330300 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:53:01.330305 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:53:01.330311 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:53:01.330316 | orchestrator | 2025-09-08 00:53:01.330321 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-08 00:53:01.330327 | orchestrator | Monday 08 September 2025 00:51:16 +0000 (0:00:02.377) 0:04:47.726 ****** 2025-09-08 00:53:01.330332 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:53:01.330337 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:53:01.330343 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:53:01.330348 | orchestrator | 2025-09-08 00:53:01.330353 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-08 00:53:01.330359 | orchestrator | Monday 08 September 2025 00:51:20 +0000 (0:00:03.211) 0:04:50.937 ****** 2025-09-08 00:53:01.330364 | orchestrator | 
included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:53:01.330369 | orchestrator | 2025-09-08 00:53:01.330375 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-08 00:53:01.330380 | orchestrator | Monday 08 September 2025 00:51:21 +0000 (0:00:01.607) 0:04:52.545 ****** 2025-09-08 00:53:01.330391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 00:53:01.330397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 00:53:01.330407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 00:53:01.330428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 00:53:01.330435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-08 00:53:01.330440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 00:53:01.330446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.330454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 00:53:01.330466 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-08 00:53:01.330487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.330493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 
'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 00:53:01.330499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 00:53:01.330544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 00:53:01.330553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2025-09-08 00:53:01.330564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.330569 | orchestrator | 2025-09-08 00:53:01.330575 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-08 00:53:01.330580 | orchestrator | Monday 08 September 2025 00:51:25 +0000 (0:00:03.485) 0:04:56.030 ****** 2025-09-08 00:53:01.330603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-08 00:53:01.330610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 00:53:01.330616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 00:53:01.330621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-08 00:53:01.330634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-08 00:53:01.330639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 00:53:01.330645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.330665 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.330671 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 00:53:01.330677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-08 00:53:01.330682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.330687 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.330695 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-08 00:53:01.330704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 00:53:01.330709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 00:53:01.330727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-08 00:53:01.330733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:53:01.330738 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.330743 | orchestrator | 2025-09-08 00:53:01.330748 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-08 00:53:01.330752 | orchestrator | Monday 08 September 2025 00:51:25 +0000 (0:00:00.733) 0:04:56.764 ****** 2025-09-08 00:53:01.330757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-08 00:53:01.330762 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-08 00:53:01.330767 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.330772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-08 00:53:01.330780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-08 00:53:01.330785 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.330790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-08 00:53:01.330797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-08 00:53:01.330802 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.330807 | orchestrator | 2025-09-08 00:53:01.330812 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-08 00:53:01.330817 | orchestrator | Monday 08 September 2025 00:51:27 +0000 (0:00:01.529) 0:04:58.293 ****** 2025-09-08 00:53:01.330821 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.330826 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.330831 | orchestrator | changed: [testbed-node-2] 2025-09-08 
00:53:01.330835 | orchestrator | 2025-09-08 00:53:01.330840 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-09-08 00:53:01.330845 | orchestrator | Monday 08 September 2025 00:51:28 +0000 (0:00:01.446) 0:04:59.740 ****** 2025-09-08 00:53:01.330850 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:53:01.330854 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:53:01.330859 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:53:01.330864 | orchestrator | 2025-09-08 00:53:01.330868 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-09-08 00:53:01.330873 | orchestrator | Monday 08 September 2025 00:51:31 +0000 (0:00:02.185) 0:05:01.926 ****** 2025-09-08 00:53:01.330878 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:53:01.330883 | orchestrator | 2025-09-08 00:53:01.330887 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-09-08 00:53:01.330892 | orchestrator | Monday 08 September 2025 00:51:32 +0000 (0:00:01.422) 0:05:03.348 ****** 2025-09-08 00:53:01.330910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:53:01.330917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:53:01.330926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:53:01.330934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-08 00:53:01.330953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-08 00:53:01.330959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-08 00:53:01.330968 | orchestrator |
2025-09-08 00:53:01.330973 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2025-09-08 00:53:01.330978 | orchestrator | Monday 08 September 2025 00:51:37 +0000 (0:00:05.342) 0:05:08.691 ******
2025-09-08 00:53:01.330983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-08 00:53:01.330993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-08 00:53:01.330998 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.331003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-08 00:53:01.331022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-08 00:53:01.331031 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.331036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-08 00:53:01.331044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-08 00:53:01.331050 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.331054 | orchestrator |
2025-09-08 00:53:01.331059 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2025-09-08 00:53:01.331064 | orchestrator | Monday 08 September 2025 00:51:38 +0000 (0:00:00.678) 0:05:09.369 ******
2025-09-08 00:53:01.331069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-09-08 00:53:01.331074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-08 00:53:01.331079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-08 00:53:01.331084 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.331089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-09-08 00:53:01.331106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-08 00:53:01.331112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-08 00:53:01.331120 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.331125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-09-08 00:53:01.331130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-08 00:53:01.331135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-08 00:53:01.331140 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.331144 | orchestrator |
2025-09-08 00:53:01.331149 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2025-09-08 00:53:01.331154 | orchestrator | Monday 08 September 2025 00:51:39 +0000 (0:00:00.940) 0:05:10.310 ******
2025-09-08 00:53:01.331159 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.331163 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.331168 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.331173 | orchestrator |
2025-09-08 00:53:01.331178 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2025-09-08 00:53:01.331182 | orchestrator | Monday 08 September 2025 00:51:40 +0000 (0:00:00.830) 0:05:11.140 ******
2025-09-08 00:53:01.331187 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.331192 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.331196 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.331201 | orchestrator |
2025-09-08 00:53:01.331206 | orchestrator | TASK [include_role : prometheus] ***********************************************
2025-09-08 00:53:01.331210 | orchestrator | Monday 08 September 2025 00:51:41 +0000 (0:00:01.337) 0:05:12.478 ******
2025-09-08 00:53:01.331215 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:53:01.331220 | orchestrator |
2025-09-08 00:53:01.331225 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-09-08 00:53:01.331229 | orchestrator | Monday 08 September 2025 00:51:43 +0000 (0:00:01.419) 0:05:13.898 ******
2025-09-08 00:53:01.331237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-08 00:53:01.331242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-08 00:53:01.331247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-08 00:53:01.331280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-08 00:53:01.331285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-08 00:53:01.331293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-08 00:53:01.331325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-08 00:53:01.331331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-08 00:53:01.331336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-08 00:53:01.331354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-08 00:53:01.331364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-08 00:53:01.331370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-08 00:53:01.331388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-08 00:53:01.331393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-08 00:53:01.331404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-08 00:53:01.331419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-08 00:53:01.331427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-08 00:53:01.331432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-08 00:53:01.331452 | orchestrator |
2025-09-08 00:53:01.331457 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-09-08 00:53:01.331464 | orchestrator | Monday 08 September 2025 00:51:47 +0000 (0:00:04.512) 0:05:18.410 ******
2025-09-08 00:53:01.331469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-08 00:53:01.331474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-08 00:53:01.331479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-08 00:53:01.331498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-08 00:53:01.331523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-08 00:53:01.331528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-08 00:53:01.331538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-08 00:53:01.331553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-08 00:53:01.331560 | orchestrator | skipping: [testbed-node-1] => (item={'key':
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:53:01.331565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:53:01.331570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  
2025-09-08 00:53:01.331607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-08 00:53:01.331620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-08 00:53:01.331625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-08 00:53:01.331634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-08 00:53:01.331639 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.331644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-08 00:53:01.331678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-08 00:53:01.331683 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.331690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-08 00:53:01.331696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-08 00:53:01.331701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:53:01.331716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-08 00:53:01.331722 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.331726 | orchestrator |
2025-09-08 00:53:01.331731 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-09-08 00:53:01.331736 | orchestrator | Monday 08 September 2025 00:51:48 +0000 (0:00:01.224) 0:05:19.635 ******
2025-09-08 00:53:01.331741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-08 00:53:01.331746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-08 00:53:01.331751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-08 00:53:01.331757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-08 00:53:01.331762 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.331767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-08 00:53:01.331774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-08 00:53:01.331779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-08 00:53:01.331784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-08 00:53:01.331789 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.331794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-08 00:53:01.331799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-08 00:53:01.331807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-08 00:53:01.331812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-08 00:53:01.331817 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.331822 | orchestrator |
2025-09-08 00:53:01.331826 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2025-09-08 00:53:01.331831 | orchestrator | Monday 08 September 2025 00:51:49 +0000 (0:00:01.006) 0:05:20.641 ******
2025-09-08 00:53:01.331836 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.331841 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.331846 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.331851 | orchestrator |
2025-09-08 00:53:01.331855 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2025-09-08 00:53:01.331860 | orchestrator | Monday 08 September 2025 00:51:50 +0000 (0:00:00.491) 0:05:21.132 ******
2025-09-08 00:53:01.331865 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.331872 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.331877 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.331882 | orchestrator |
2025-09-08 00:53:01.331887 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2025-09-08 00:53:01.331891 | orchestrator | Monday 08 September 2025 00:51:51 +0000 (0:00:01.449) 0:05:22.582 ******
2025-09-08 00:53:01.331896 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:53:01.331901 | orchestrator |
2025-09-08 00:53:01.331906 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2025-09-08 00:53:01.331911 | orchestrator | Monday 08 September 2025 00:51:53 +0000 (0:00:01.753) 0:05:24.336 ******
2025-09-08 00:53:01.331916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-08 00:53:01.331925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-08 00:53:01.331933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-08 00:53:01.331939 | orchestrator |
2025-09-08 00:53:01.331944 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2025-09-08 00:53:01.331948 | orchestrator | Monday 08 September 2025 00:51:56 +0000 (0:00:02.589) 0:05:26.925 ******
2025-09-08 00:53:01.331956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-08 00:53:01.331962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-08 00:53:01.331967 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.331972 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.331979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-08 00:53:01.331988 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.331993 | orchestrator |
2025-09-08 00:53:01.331998 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-09-08 00:53:01.332003 | orchestrator | Monday 08 September 2025 00:51:56 +0000 (0:00:00.448) 0:05:27.373 ******
2025-09-08 00:53:01.332007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-08 00:53:01.332012 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.332017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-08 00:53:01.332022 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.332027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-08 00:53:01.332032 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.332036 | orchestrator |
2025-09-08 00:53:01.332041 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-09-08 00:53:01.332046 | orchestrator | Monday 08 September 2025 00:51:57 +0000 (0:00:01.019) 0:05:28.392 ******
2025-09-08 00:53:01.332051 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.332055 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.332060 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.332065 | orchestrator |
2025-09-08 00:53:01.332070 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2025-09-08 00:53:01.332074 | orchestrator | Monday 08 September 2025 00:51:58 +0000 (0:00:00.445) 0:05:28.838 ******
2025-09-08 00:53:01.332079 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.332084 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.332089 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.332093 | orchestrator |
2025-09-08 00:53:01.332098 | orchestrator | TASK [include_role : skyline] **************************************************
2025-09-08 00:53:01.332103 | orchestrator | Monday 08 September 2025 00:51:59 +0000 (0:00:01.378) 0:05:30.216 ******
2025-09-08 00:53:01.332108 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:53:01.332113 | orchestrator |
2025-09-08 00:53:01.332120 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-09-08 00:53:01.332125 | orchestrator | Monday 08 September 2025 00:52:01 +0000 (0:00:01.788) 0:05:32.005 ******
2025-09-08 00:53:01.332130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-08 00:53:01.332137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-08 00:53:01.332146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-08 00:53:01.332151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-08 00:53:01.332161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-08 00:53:01.332166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-08 00:53:01.332174 | orchestrator |
2025-09-08 00:53:01.332181 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2025-09-08 00:53:01.332186 | orchestrator | Monday 08 September 2025 00:52:07 +0000 (0:00:06.582) 0:05:38.588 ******
2025-09-08 00:53:01.332191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-08 00:53:01.332196 | orchestrator | skipping:
[testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-08 00:53:01.332201 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.332209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-08 00:53:01.332214 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-08 00:53:01.332223 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.332230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-08 00:53:01.332236 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-08 00:53:01.332241 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:53:01.332245 | orchestrator | 2025-09-08 00:53:01.332250 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-09-08 00:53:01.332255 | orchestrator | Monday 08 September 2025 00:52:08 +0000 (0:00:00.636) 0:05:39.224 ****** 2025-09-08 00:53:01.332260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-08 00:53:01.332265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-08 00:53:01.332272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 
'no'}})  2025-09-08 00:53:01.332277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-08 00:53:01.332282 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:53:01.332287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-08 00:53:01.332296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-08 00:53:01.332301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-08 00:53:01.332306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-08 00:53:01.332311 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:53:01.332316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-08 00:53:01.332323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  
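Each haproxy-config item echoed in the log above follows one shape: a service entry with `container_name`, `image`, a `healthcheck`, and a `haproxy` map of internal/external frontends. A minimal sketch of that structure, with a hypothetical helper for listing the enabled frontends (illustrative only, not kolla-ansible code; field values copied from the skyline-apiserver item above):

```python
# Hypothetical sketch: the dict mirrors the kolla-ansible haproxy-config
# item shape seen in the log; enabled_frontends() is illustrative only.
skyline_apiserver = {
    "container_name": "skyline_apiserver",
    "image": "registry.osism.tech/kolla/skyline-apiserver:2024.2",
    "enabled": True,
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9998/docs"],
    },
    "haproxy": {
        # internal frontend on the API network
        "skyline_apiserver": {"enabled": "yes", "external": False, "port": "9998"},
        # external frontend behind the public FQDN
        "skyline_apiserver_external": {
            "enabled": "yes",
            "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "9998",
        },
    },
}


def enabled_frontends(service):
    """Return (name, port, external) for every enabled haproxy frontend."""
    return [
        (name, fe["port"], fe["external"])
        for name, fe in service.get("haproxy", {}).items()
        if fe.get("enabled") == "yes"
    ]


print(enabled_frontends(skyline_apiserver))
# → [('skyline_apiserver', '9998', False), ('skyline_apiserver_external', '9998', True)]
```

This is why each service produces two frontend blocks in the generated haproxy config: one bound internally and one on the external VIP with the FQDN.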
2025-09-08 00:53:01.332328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-08 00:53:01.332333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-08 00:53:01.332337 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.332342 | orchestrator |
2025-09-08 00:53:01.332347 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-09-08 00:53:01.332352 | orchestrator | Monday 08 September 2025 00:52:10 +0000 (0:00:01.615) 0:05:40.840 ******
2025-09-08 00:53:01.332356 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:53:01.332361 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:53:01.332366 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:53:01.332371 | orchestrator |
2025-09-08 00:53:01.332375 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-09-08 00:53:01.332380 | orchestrator | Monday 08 September 2025 00:52:11 +0000 (0:00:01.378) 0:05:42.219 ******
2025-09-08 00:53:01.332385 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:53:01.332390 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:53:01.332394 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:53:01.332399 | orchestrator |
2025-09-08 00:53:01.332404 | orchestrator | TASK [include_role : swift] ****************************************************
2025-09-08 00:53:01.332409 | orchestrator | Monday 08 September 2025 00:52:13 +0000 (0:00:02.163) 0:05:44.382 ******
2025-09-08 00:53:01.332413 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.332418 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.332423 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.332427 | orchestrator |
2025-09-08 00:53:01.332432 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-09-08 00:53:01.332437 | orchestrator | Monday 08 September 2025 00:52:13 +0000 (0:00:00.352) 0:05:44.734 ******
2025-09-08 00:53:01.332442 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.332446 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.332451 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.332456 | orchestrator |
2025-09-08 00:53:01.332461 | orchestrator | TASK [include_role : trove] ****************************************************
2025-09-08 00:53:01.332465 | orchestrator | Monday 08 September 2025 00:52:14 +0000 (0:00:00.333) 0:05:45.068 ******
2025-09-08 00:53:01.332473 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.332478 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.332483 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.332488 | orchestrator |
2025-09-08 00:53:01.332493 | orchestrator | TASK [include_role : venus] ****************************************************
2025-09-08 00:53:01.332497 | orchestrator | Monday 08 September 2025 00:52:14 +0000 (0:00:00.636) 0:05:45.704 ******
2025-09-08 00:53:01.332513 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.332518 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.332523 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.332528 | orchestrator |
2025-09-08 00:53:01.332532 | orchestrator | TASK [include_role : watcher] **************************************************
2025-09-08 00:53:01.332537 | orchestrator | Monday 08 September 2025 00:52:15 +0000 (0:00:00.330) 0:05:46.035 ******
2025-09-08 00:53:01.332542 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.332549 | orchestrator | skipping:
[testbed-node-1]
2025-09-08 00:53:01.332554 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.332559 | orchestrator |
2025-09-08 00:53:01.332564 | orchestrator | TASK [include_role : zun] ******************************************************
2025-09-08 00:53:01.332568 | orchestrator | Monday 08 September 2025 00:52:15 +0000 (0:00:00.326) 0:05:46.361 ******
2025-09-08 00:53:01.332573 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.332578 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.332583 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.332587 | orchestrator |
2025-09-08 00:53:01.332592 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-09-08 00:53:01.332597 | orchestrator | Monday 08 September 2025 00:52:16 +0000 (0:00:00.894) 0:05:47.256 ******
2025-09-08 00:53:01.332602 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:53:01.332606 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:53:01.332611 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:53:01.332616 | orchestrator |
2025-09-08 00:53:01.332620 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-09-08 00:53:01.332625 | orchestrator | Monday 08 September 2025 00:52:17 +0000 (0:00:00.763) 0:05:48.020 ******
2025-09-08 00:53:01.332630 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:53:01.332635 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:53:01.332639 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:53:01.332644 | orchestrator |
2025-09-08 00:53:01.332649 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-09-08 00:53:01.332654 | orchestrator | Monday 08 September 2025 00:52:17 +0000 (0:00:00.380) 0:05:48.401 ******
2025-09-08 00:53:01.332658 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:53:01.332663 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:53:01.332668 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:53:01.332672 | orchestrator |
2025-09-08 00:53:01.332677 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-09-08 00:53:01.332682 | orchestrator | Monday 08 September 2025 00:52:18 +0000 (0:00:00.957) 0:05:49.358 ******
2025-09-08 00:53:01.332687 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:53:01.332691 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:53:01.332696 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:53:01.332701 | orchestrator |
2025-09-08 00:53:01.332705 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-09-08 00:53:01.332710 | orchestrator | Monday 08 September 2025 00:52:19 +0000 (0:00:01.245) 0:05:50.604 ******
2025-09-08 00:53:01.332715 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:53:01.332720 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:53:01.332727 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:53:01.332732 | orchestrator |
2025-09-08 00:53:01.332736 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-09-08 00:53:01.332741 | orchestrator | Monday 08 September 2025 00:52:20 +0000 (0:00:00.925) 0:05:51.530 ******
2025-09-08 00:53:01.332746 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:53:01.332754 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:53:01.332759 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:53:01.332764 | orchestrator |
2025-09-08 00:53:01.332769 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-09-08 00:53:01.332773 | orchestrator | Monday 08 September 2025 00:52:29 +0000 (0:00:08.394) 0:05:59.924 ******
2025-09-08 00:53:01.332778 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:53:01.332783 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:53:01.332788 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:53:01.332792 | orchestrator |
2025-09-08 00:53:01.332797 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-09-08 00:53:01.332802 | orchestrator | Monday 08 September 2025 00:52:29 +0000 (0:00:00.756) 0:06:00.681 ******
2025-09-08 00:53:01.332807 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:53:01.332811 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:53:01.332816 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:53:01.332821 | orchestrator |
2025-09-08 00:53:01.332826 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-09-08 00:53:01.332830 | orchestrator | Monday 08 September 2025 00:52:43 +0000 (0:00:13.173) 0:06:13.854 ******
2025-09-08 00:53:01.332835 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:53:01.332840 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:53:01.332844 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:53:01.332849 | orchestrator |
2025-09-08 00:53:01.332854 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-09-08 00:53:01.332859 | orchestrator | Monday 08 September 2025 00:52:44 +0000 (0:00:01.124) 0:06:14.979 ******
2025-09-08 00:53:01.332863 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:53:01.332868 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:53:01.332873 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:53:01.332878 | orchestrator |
2025-09-08 00:53:01.332882 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-09-08 00:53:01.332887 | orchestrator | Monday 08 September 2025 00:52:53 +0000 (0:00:09.543) 0:06:24.523 ******
2025-09-08 00:53:01.332892 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.332897 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.332901 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.332906 | orchestrator |
2025-09-08 00:53:01.332911 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-09-08 00:53:01.332915 | orchestrator | Monday 08 September 2025 00:52:54 +0000 (0:00:00.356) 0:06:24.879 ******
2025-09-08 00:53:01.332920 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.332925 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.332930 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.332934 | orchestrator |
2025-09-08 00:53:01.332939 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-09-08 00:53:01.332944 | orchestrator | Monday 08 September 2025 00:52:54 +0000 (0:00:00.353) 0:06:25.233 ******
2025-09-08 00:53:01.332949 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.332953 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.332958 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.332963 | orchestrator |
2025-09-08 00:53:01.332967 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-09-08 00:53:01.332972 | orchestrator | Monday 08 September 2025 00:52:55 +0000 (0:00:00.705) 0:06:25.939 ******
2025-09-08 00:53:01.332977 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.332982 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.332986 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.332991 | orchestrator |
2025-09-08 00:53:01.332998 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-09-08 00:53:01.333003 | orchestrator | Monday 08 September 2025 00:52:55 +0000 (0:00:00.392) 0:06:26.331 ******
2025-09-08 00:53:01.333008 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.333013 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.333020 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.333025 | orchestrator |
2025-09-08 00:53:01.333030 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-09-08 00:53:01.333035 | orchestrator | Monday 08 September 2025 00:52:55 +0000 (0:00:00.410) 0:06:26.742 ******
2025-09-08 00:53:01.333039 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:53:01.333044 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:53:01.333049 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:53:01.333054 | orchestrator |
2025-09-08 00:53:01.333058 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-09-08 00:53:01.333063 | orchestrator | Monday 08 September 2025 00:52:56 +0000 (0:00:00.423) 0:06:27.166 ******
2025-09-08 00:53:01.333068 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:53:01.333072 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:53:01.333077 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:53:01.333082 | orchestrator |
2025-09-08 00:53:01.333087 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-09-08 00:53:01.333091 | orchestrator | Monday 08 September 2025 00:52:57 +0000 (0:00:01.426) 0:06:28.592 ******
2025-09-08 00:53:01.333096 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:53:01.333101 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:53:01.333106 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:53:01.333110 | orchestrator |
2025-09-08 00:53:01.333115 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:53:01.333120 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-08 00:53:01.333125 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-08 00:53:01.333130 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0
ignored=0
2025-09-08 00:53:01.333135 | orchestrator |
2025-09-08 00:53:01.333139 | orchestrator |
2025-09-08 00:53:01.333146 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:53:01.333151 | orchestrator | Monday 08 September 2025 00:52:58 +0000 (0:00:00.868) 0:06:29.461 ******
2025-09-08 00:53:01.333156 | orchestrator | ===============================================================================
2025-09-08 00:53:01.333161 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.17s
2025-09-08 00:53:01.333165 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.54s
2025-09-08 00:53:01.333170 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.39s
2025-09-08 00:53:01.333175 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.62s
2025-09-08 00:53:01.333180 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.58s
2025-09-08 00:53:01.333184 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.34s
2025-09-08 00:53:01.333189 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.56s
2025-09-08 00:53:01.333194 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.51s
2025-09-08 00:53:01.333199 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.49s
2025-09-08 00:53:01.333203 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.46s
2025-09-08 00:53:01.333208 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.41s
2025-09-08 00:53:01.333213 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.31s
2025-09-08 00:53:01.333217 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 4.25s
2025-09-08 00:53:01.333222 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.19s
2025-09-08 00:53:01.333227 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.19s
2025-09-08 00:53:01.333235 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.18s
2025-09-08 00:53:01.333240 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.08s
2025-09-08 00:53:01.333245 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.88s
2025-09-08 00:53:01.333249 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.75s
2025-09-08 00:53:01.333254 | orchestrator | loadbalancer : Ensuring config directories exist ------------------------ 3.69s
2025-09-08 00:53:01.333259 | orchestrator | 2025-09-08 00:53:01 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED
2025-09-08 00:53:01.333264 | orchestrator | 2025-09-08 00:53:01 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:53:04.371276 | orchestrator | 2025-09-08 00:53:04 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED
2025-09-08 00:53:04.372208 | orchestrator | 2025-09-08 00:53:04 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08 00:53:04.373267 | orchestrator | 2025-09-08 00:53:04 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED
2025-09-08 00:53:04.373559 | orchestrator | 2025-09-08 00:53:04 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:53:07.419357 | orchestrator | 2025-09-08 00:53:07 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED
2025-09-08 00:53:07.419721 | orchestrator | 2025-09-08 00:53:07 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED
2025-09-08
00:53:07.421645 | orchestrator | 2025-09-08 00:53:07 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED
2025-09-08 00:53:07.421669 | orchestrator | 2025-09-08 00:53:07 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:53:44.042647 | orchestrator
| 2025-09-08 00:53:44 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:53:44.044771 | orchestrator | 2025-09-08 00:53:44 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:53:44.047199 | orchestrator | 2025-09-08 00:53:44 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:53:44.047240 | orchestrator | 2025-09-08 00:53:44 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:53:47.087190 | orchestrator | 2025-09-08 00:53:47 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:53:47.089443 | orchestrator | 2025-09-08 00:53:47 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:53:47.091344 | orchestrator | 2025-09-08 00:53:47 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:53:47.091533 | orchestrator | 2025-09-08 00:53:47 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:53:50.141263 | orchestrator | 2025-09-08 00:53:50 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:53:50.143582 | orchestrator | 2025-09-08 00:53:50 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:53:50.145334 | orchestrator | 2025-09-08 00:53:50 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:53:50.145711 | orchestrator | 2025-09-08 00:53:50 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:53:53.190654 | orchestrator | 2025-09-08 00:53:53 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:53:53.192215 | orchestrator | 2025-09-08 00:53:53 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:53:53.193658 | orchestrator | 2025-09-08 00:53:53 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:53:53.193681 | orchestrator | 2025-09-08 00:53:53 | INFO  | 
Wait 1 second(s) until the next check 2025-09-08 00:53:56.238417 | orchestrator | 2025-09-08 00:53:56 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:53:56.242705 | orchestrator | 2025-09-08 00:53:56 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:53:56.242745 | orchestrator | 2025-09-08 00:53:56 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:53:56.242759 | orchestrator | 2025-09-08 00:53:56 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:53:59.298098 | orchestrator | 2025-09-08 00:53:59 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:53:59.298528 | orchestrator | 2025-09-08 00:53:59 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:53:59.299255 | orchestrator | 2025-09-08 00:53:59 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:53:59.299572 | orchestrator | 2025-09-08 00:53:59 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:02.349929 | orchestrator | 2025-09-08 00:54:02 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:54:02.352234 | orchestrator | 2025-09-08 00:54:02 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:54:02.355039 | orchestrator | 2025-09-08 00:54:02 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:54:02.355788 | orchestrator | 2025-09-08 00:54:02 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:05.392643 | orchestrator | 2025-09-08 00:54:05 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:54:05.393260 | orchestrator | 2025-09-08 00:54:05 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:54:05.394866 | orchestrator | 2025-09-08 00:54:05 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state 
STARTED 2025-09-08 00:54:05.395532 | orchestrator | 2025-09-08 00:54:05 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:08.440430 | orchestrator | 2025-09-08 00:54:08 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:54:08.441523 | orchestrator | 2025-09-08 00:54:08 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:54:08.442908 | orchestrator | 2025-09-08 00:54:08 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:54:08.443124 | orchestrator | 2025-09-08 00:54:08 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:11.501599 | orchestrator | 2025-09-08 00:54:11 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:54:11.504224 | orchestrator | 2025-09-08 00:54:11 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:54:11.506969 | orchestrator | 2025-09-08 00:54:11 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:54:11.507017 | orchestrator | 2025-09-08 00:54:11 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:14.549920 | orchestrator | 2025-09-08 00:54:14 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:54:14.550444 | orchestrator | 2025-09-08 00:54:14 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:54:14.553190 | orchestrator | 2025-09-08 00:54:14 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:54:14.553232 | orchestrator | 2025-09-08 00:54:14 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:17.602757 | orchestrator | 2025-09-08 00:54:17 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:54:17.603785 | orchestrator | 2025-09-08 00:54:17 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:54:17.605642 | orchestrator | 
2025-09-08 00:54:17 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:54:17.606334 | orchestrator | 2025-09-08 00:54:17 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:20.651072 | orchestrator | 2025-09-08 00:54:20 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:54:20.651863 | orchestrator | 2025-09-08 00:54:20 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:54:20.653942 | orchestrator | 2025-09-08 00:54:20 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:54:20.654323 | orchestrator | 2025-09-08 00:54:20 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:23.701787 | orchestrator | 2025-09-08 00:54:23 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:54:23.703781 | orchestrator | 2025-09-08 00:54:23 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:54:23.706342 | orchestrator | 2025-09-08 00:54:23 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:54:23.706373 | orchestrator | 2025-09-08 00:54:23 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:26.751059 | orchestrator | 2025-09-08 00:54:26 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:54:26.753054 | orchestrator | 2025-09-08 00:54:26 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:54:26.755848 | orchestrator | 2025-09-08 00:54:26 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:54:26.755952 | orchestrator | 2025-09-08 00:54:26 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:29.800073 | orchestrator | 2025-09-08 00:54:29 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:54:29.801675 | orchestrator | 2025-09-08 00:54:29 | INFO  | Task 
dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:54:29.805319 | orchestrator | 2025-09-08 00:54:29 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:54:29.805344 | orchestrator | 2025-09-08 00:54:29 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:32.849314 | orchestrator | 2025-09-08 00:54:32 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:54:32.853190 | orchestrator | 2025-09-08 00:54:32 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:54:32.863050 | orchestrator | 2025-09-08 00:54:32 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:54:32.863079 | orchestrator | 2025-09-08 00:54:32 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:35.913909 | orchestrator | 2025-09-08 00:54:35 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:54:35.915034 | orchestrator | 2025-09-08 00:54:35 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:54:35.916170 | orchestrator | 2025-09-08 00:54:35 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:54:35.916201 | orchestrator | 2025-09-08 00:54:35 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:38.959808 | orchestrator | 2025-09-08 00:54:38 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:54:38.961394 | orchestrator | 2025-09-08 00:54:38 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:54:38.963262 | orchestrator | 2025-09-08 00:54:38 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:54:38.963594 | orchestrator | 2025-09-08 00:54:38 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:42.002840 | orchestrator | 2025-09-08 00:54:42 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state 
STARTED 2025-09-08 00:54:42.004026 | orchestrator | 2025-09-08 00:54:42 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:54:42.005277 | orchestrator | 2025-09-08 00:54:42 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:54:42.005303 | orchestrator | 2025-09-08 00:54:42 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:45.044989 | orchestrator | 2025-09-08 00:54:45 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:54:45.046977 | orchestrator | 2025-09-08 00:54:45 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:54:45.049263 | orchestrator | 2025-09-08 00:54:45 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:54:45.049435 | orchestrator | 2025-09-08 00:54:45 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:48.090251 | orchestrator | 2025-09-08 00:54:48 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:54:48.092143 | orchestrator | 2025-09-08 00:54:48 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:54:48.094817 | orchestrator | 2025-09-08 00:54:48 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:54:48.094846 | orchestrator | 2025-09-08 00:54:48 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:51.147744 | orchestrator | 2025-09-08 00:54:51 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:54:51.149050 | orchestrator | 2025-09-08 00:54:51 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:54:51.150891 | orchestrator | 2025-09-08 00:54:51 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:54:51.150931 | orchestrator | 2025-09-08 00:54:51 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:54.196296 | orchestrator | 
2025-09-08 00:54:54 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:54:54.197571 | orchestrator | 2025-09-08 00:54:54 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:54:54.199093 | orchestrator | 2025-09-08 00:54:54 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:54:54.199216 | orchestrator | 2025-09-08 00:54:54 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:57.244597 | orchestrator | 2025-09-08 00:54:57 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:54:57.246119 | orchestrator | 2025-09-08 00:54:57 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:54:57.249590 | orchestrator | 2025-09-08 00:54:57 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:54:57.249614 | orchestrator | 2025-09-08 00:54:57 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:00.295000 | orchestrator | 2025-09-08 00:55:00 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:55:00.296755 | orchestrator | 2025-09-08 00:55:00 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:55:00.298582 | orchestrator | 2025-09-08 00:55:00 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:55:00.298610 | orchestrator | 2025-09-08 00:55:00 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:03.344980 | orchestrator | 2025-09-08 00:55:03 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:55:03.345692 | orchestrator | 2025-09-08 00:55:03 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:55:03.348919 | orchestrator | 2025-09-08 00:55:03 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:55:03.349022 | orchestrator | 2025-09-08 00:55:03 | INFO  | 
Wait 1 second(s) until the next check 2025-09-08 00:55:06.403068 | orchestrator | 2025-09-08 00:55:06 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:55:06.405078 | orchestrator | 2025-09-08 00:55:06 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:55:06.406213 | orchestrator | 2025-09-08 00:55:06 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:55:06.406372 | orchestrator | 2025-09-08 00:55:06 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:09.457688 | orchestrator | 2025-09-08 00:55:09 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:55:09.459800 | orchestrator | 2025-09-08 00:55:09 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:55:09.462399 | orchestrator | 2025-09-08 00:55:09 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:55:09.462938 | orchestrator | 2025-09-08 00:55:09 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:12.516328 | orchestrator | 2025-09-08 00:55:12 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:55:12.517171 | orchestrator | 2025-09-08 00:55:12 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state STARTED 2025-09-08 00:55:12.519833 | orchestrator | 2025-09-08 00:55:12 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:55:12.519906 | orchestrator | 2025-09-08 00:55:12 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:15.563397 | orchestrator | 2025-09-08 00:55:15 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:55:15.572014 | orchestrator | 2025-09-08 00:55:15 | INFO  | Task dec56c4a-6057-4047-9342-7437f195924c is in state SUCCESS 2025-09-08 00:55:15.574410 | orchestrator | 2025-09-08 00:55:15.574438 | orchestrator | 2025-09-08 00:55:15.574446 | orchestrator 
PLAY [Prepare deployment of Ceph services] *************************************

TASK [ceph-facts : Include facts.yml] ******************************************
Monday 08 September 2025 00:43:51 +0000 (0:00:00.699) 0:00:00.699 ******
included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-facts : Check if it is atomic host] *********************************
Monday 08 September 2025 00:43:52 +0000 (0:00:01.120) 0:00:01.820 ******
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [ceph-facts : Set_fact is_atomic] *****************************************
Monday 08 September 2025 00:43:54 +0000 (0:00:01.666) 0:00:03.486 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Check if podman binary is present] **************************
Monday 08 September 2025 00:43:55 +0000 (0:00:00.832) 0:00:04.318 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [ceph-facts : Set_fact container_binary] **********************************
Monday 08 September 2025 00:43:56 +0000 (0:00:00.962) 0:00:05.280 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
Monday 08 September 2025 00:43:56 +0000 (0:00:00.643) 0:00:05.923 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
Monday 08 September 2025 00:43:57 +0000 (0:00:00.651) 0:00:06.575 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
Monday 08 September 2025 00:43:58 +0000 (0:00:00.861) 0:00:07.436 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
Monday 08 September 2025 00:43:59 +0000 (0:00:00.961) 0:00:08.398 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
Monday 08 September 2025 00:44:00 +0000 (0:00:01.594) 0:00:09.993 ******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
Monday 08 September 2025 00:44:01 +0000 (0:00:00.787) 0:00:10.781 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-5]
ok: [testbed-node-0]

TASK [ceph-facts : Find a running mon container] *******************************
Monday 08 September 2025 00:44:02 +0000 (0:00:01.046) 0:00:11.828 ******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-facts : Check for a ceph mon socket] ********************************
Monday 08 September 2025 00:44:05 +0000 (0:00:02.825) 0:00:14.653 ******
skipping: [testbed-node-3] => (item=testbed-node-0)
skipping: [testbed-node-3] => (item=testbed-node-1)
skipping: [testbed-node-3] => (item=testbed-node-2)
skipping: [testbed-node-3]

TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
Monday 08 September 2025 00:44:05 +0000 (0:00:00.371) 0:00:15.024 ******
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
Monday 08 September 2025 00:44:06 +0000 (0:00:00.978) 0:00:16.003 ******
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact running_mon - container] ***************************
Monday 08 September 2025 00:44:07 +0000 (0:00:00.186) 0:00:16.189 ******
skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-08 00:44:03.333241', 'end': '2025-09-08 00:44:03.619644', 'delta': '0:00:00.286403', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-08 00:44:04.338336', 'end': '2025-09-08 00:44:04.637279', 'delta': '0:00:00.298943', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-08 00:44:05.084264', 'end': '2025-09-08 00:44:05.383931', 'delta': '0:00:00.299667', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
Monday 08 September 2025 00:44:07 +0000 (0:00:00.492) 0:00:16.682 ******
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Get current fsid if cluster is already running] *************
Monday 08 September 2025 00:44:09 +0000 (0:00:01.641) 0:00:18.323 ******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
Monday 08 September 2025 00:44:09 +0000 (0:00:00.756) 0:00:19.080 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Get current fsid] *******************************************
Monday 08 September 2025 00:44:11 +0000 (0:00:01.541) 0:00:20.621 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-0]
skipping: [testbed-node-5]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact fsid] **********************************************
Monday 08 September 2025 00:44:13 +0000 (0:00:01.699) 0:00:22.321 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
Monday 08 September 2025 00:44:14 +0000 (0:00:01.451) 0:00:23.773 ******
skipping: [testbed-node-3]

TASK [ceph-facts : Generate cluster fsid] **************************************
Monday 08 September 2025 00:44:14 +0000 (0:00:00.175) 0:00:23.948 ******
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact fsid] **********************************************
Monday 08 September 2025 00:44:15 +0000 (0:00:00.372) 0:00:24.321 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Resolve device link(s)] *************************************
Monday 08 September 2025 00:44:16 +0000 (0:00:01.162) 0:00:25.484 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
Monday 08 September 2025 00:44:17 +0000 (0:00:00.977) 0:00:26.461 ******
skipping:
[testbed-node-3] 2025-09-08 00:55:15.576830 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.576849 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.576856 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.576894 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.576901 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.576908 | orchestrator | 2025-09-08 00:55:15.576915 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-08 00:55:15.576922 | orchestrator | Monday 08 September 2025 00:44:18 +0000 (0:00:00.920) 0:00:27.382 ****** 2025-09-08 00:55:15.576929 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.576936 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.576990 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.577041 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.577051 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.577412 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.577427 | orchestrator | 2025-09-08 00:55:15.577434 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-08 00:55:15.577441 | orchestrator | Monday 08 September 2025 00:44:19 +0000 (0:00:01.279) 0:00:28.662 ****** 2025-09-08 00:55:15.577448 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.577495 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.577504 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.577511 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.577518 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.577525 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.577532 | orchestrator | 2025-09-08 00:55:15.577539 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-08 00:55:15.577547 | 
orchestrator | Monday 08 September 2025 00:44:20 +0000 (0:00:00.884) 0:00:29.546 ****** 2025-09-08 00:55:15.577554 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.577561 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.577568 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.577575 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.577621 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.577629 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.577636 | orchestrator | 2025-09-08 00:55:15.577644 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-08 00:55:15.577651 | orchestrator | Monday 08 September 2025 00:44:21 +0000 (0:00:00.755) 0:00:30.302 ****** 2025-09-08 00:55:15.577658 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.577665 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.577673 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.577680 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.577687 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.577694 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.577701 | orchestrator | 2025-09-08 00:55:15.577758 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-08 00:55:15.577766 | orchestrator | Monday 08 September 2025 00:44:21 +0000 (0:00:00.582) 0:00:30.884 ****** 2025-09-08 00:55:15.577775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6b18b724--0587--5812--9148--41071cea985b-osd--block--6b18b724--0587--5812--9148--41071cea985b', 'dm-uuid-LVM-Y0fi8lofW9mz3to22zm3kbYL7KhM63Y1xlLKU16Wd1xmsixYPceTgcWXPxL1aXLJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.577785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9b42feaf--b3bc--5f68--b3eb--37674b93132b-osd--block--9b42feaf--b3bc--5f68--b3eb--37674b93132b', 'dm-uuid-LVM-xGVWByu1BSZhpyUwFR2O5UYeo7Gtrkxn0tf2jzAkUNYUZtr15YuY2x3DX2l8Q9zK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.577815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.577833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.577841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.578243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.578282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.578290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.578298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ea3e0024--52d1--5c15--9011--f3e2d7c1d29b-osd--block--ea3e0024--52d1--5c15--9011--f3e2d7c1d29b', 'dm-uuid-LVM-VzUCABi2BuQurjhQCMyt68tIclROzO0ZBMrjgqoBkw7h7LcfDuM8CcFlGhUXcEr9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.578306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.578349 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aa077d44--869a--533b--aa21--81dea0f926a7-osd--block--aa077d44--869a--533b--aa21--81dea0f926a7', 'dm-uuid-LVM-wucYTeDbWI7QcvaQP4VymYqWox5BgGvEFFgtYXXIG7lpzKethcm4zCW52693sfjv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.578432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.578668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.578686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691', 'scsi-SQEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part1', 'scsi-SQEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part14', 'scsi-SQEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part15', 'scsi-SQEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part16', 'scsi-SQEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.578718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.578727 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6b18b724--0587--5812--9148--41071cea985b-osd--block--6b18b724--0587--5812--9148--41071cea985b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GcQWON-gSjm-sim7-whlw-7wIw-EiAC-zzmCr8', 'scsi-0QEMU_QEMU_HARDDISK_db00b734-b58e-4932-8acd-6a266572e733', 'scsi-SQEMU_QEMU_HARDDISK_db00b734-b58e-4932-8acd-6a266572e733'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.579176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.579190 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.579208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--df550631--cfd3--5799--aa47--c702e103b9e1-osd--block--df550631--cfd3--5799--aa47--c702e103b9e1', 'dm-uuid-LVM-eqsdTLwl2bClC02oRazwntyfs3menYK4OsSbFlRGX7fYoYxlReT90CLA97P8MMZm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.579217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9b42feaf--b3bc--5f68--b3eb--37674b93132b-osd--block--9b42feaf--b3bc--5f68--b3eb--37674b93132b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WWtSOz-FohK-vviu-kCAU-m6xa-xJ2T-jmK4pl', 'scsi-0QEMU_QEMU_HARDDISK_8d0cadb8-6915-4fd2-b4e0-4946f7f23ce1', 'scsi-SQEMU_QEMU_HARDDISK_8d0cadb8-6915-4fd2-b4e0-4946f7f23ce1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.579265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.579275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eee7454c--3e15--5681--817b--16336d12a7fd-osd--block--eee7454c--3e15--5681--817b--16336d12a7fd', 'dm-uuid-LVM-bDkGLzHpLD658aJtO5kZxXyH8rTVEF09elbHrBEgYzpzpjqvnJgfUL1koASdL1iJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.579283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.579298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.579356 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f7dc1ee-c7b6-4bcc-8d38-7d9cabc41a41', 'scsi-SQEMU_QEMU_HARDDISK_1f7dc1ee-c7b6-4bcc-8d38-7d9cabc41a41'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.579368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.579380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.579387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.579396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.579548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.579557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.579618 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3', 'scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part1', 'scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part14', 'scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part15', 'scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part16', 'scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.579638 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ea3e0024--52d1--5c15--9011--f3e2d7c1d29b-osd--block--ea3e0024--52d1--5c15--9011--f3e2d7c1d29b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HHbReR-4ZVt-wANb-cQX5-t55v-9em9-DaJpxU', 'scsi-0QEMU_QEMU_HARDDISK_4b92dc1e-8c5d-4e7b-ac22-fcae021763ab', 'scsi-SQEMU_QEMU_HARDDISK_4b92dc1e-8c5d-4e7b-ac22-fcae021763ab'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.579648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--aa077d44--869a--533b--aa21--81dea0f926a7-osd--block--aa077d44--869a--533b--aa21--81dea0f926a7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fXg5xk-Hlji-j4Cf-XVAJ-bXQn-YbEm-JQawdo', 'scsi-0QEMU_QEMU_HARDDISK_59c5476b-d42d-4c70-8df0-eefae278ca55', 'scsi-SQEMU_QEMU_HARDDISK_59c5476b-d42d-4c70-8df0-eefae278ca55'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.579656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4ba40c0-17ae-4bff-a3cd-012c30b3474e', 'scsi-SQEMU_QEMU_HARDDISK_d4ba40c0-17ae-4bff-a3cd-012c30b3474e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.579668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.579721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.579731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-09-08 00:55:15.579739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.579750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.579759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8', 'scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part1', 'scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part14', 'scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part15', 'scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part16', 'scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.579772 | 
orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.579825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--df550631--cfd3--5799--aa47--c702e103b9e1-osd--block--df550631--cfd3--5799--aa47--c702e103b9e1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Rhjrl9-NLaA-oEAo-wptm-3z2r-6sbO-DYv4y8', 'scsi-0QEMU_QEMU_HARDDISK_a654280a-a62d-423c-bf4b-ecfb391ad989', 'scsi-SQEMU_QEMU_HARDDISK_a654280a-a62d-423c-bf4b-ecfb391ad989'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.579840 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--eee7454c--3e15--5681--817b--16336d12a7fd-osd--block--eee7454c--3e15--5681--817b--16336d12a7fd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-k22s6L-vE8Z-OfUE-wH35-l99G-jzdB-FLSIKu', 'scsi-0QEMU_QEMU_HARDDISK_63bbd3aa-19f1-48b0-9249-561d852b638c', 'scsi-SQEMU_QEMU_HARDDISK_63bbd3aa-19f1-48b0-9249-561d852b638c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.579847 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17ecbc41-9c45-4ac3-8b64-5422c11ec1e9', 'scsi-SQEMU_QEMU_HARDDISK_17ecbc41-9c45-4ac3-8b64-5422c11ec1e9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.579877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.579886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.579893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
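The device facts in the loop items above come from Ansible's `ansible_facts['devices']`: each node reports an 80.00 GB root disk (`sda`, carrying the `cloudimg-rootfs`, `UEFI`, and `BOOT` partitions) plus 20.00 GB Virtio data disks. The reported sizes can be cross-checked from the raw `sectors` and `sectorsize` fields. As a minimal sketch (the `size_gib` helper is illustrative, not part of any role):

```python
# Illustrative helper: convert Ansible device facts (sectors x sector size)
# into GiB, to cross-check the human-readable 'size' strings in the log.
def size_gib(sectors, sectorsize):
    # Ansible reports 'sectorsize' as a string ('512'), 'sectors' as an int.
    return sectors * int(sectorsize) / 2**30

print(size_gib(167772160, '512'))  # 80.0 -> matches sda's 'size': '80.00 GB'
print(size_gib(41943040, '512'))   # 20.0 -> matches the Virtio OSD disks
```

This confirms the 'size' strings are plain GiB values derived from the sector counts.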
 2025-09-08 00:55:15.579917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.579925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.579981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.579992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.580003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.580011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.580019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_105ff901-f58d-459b-b46a-1fffc4887b06', 'scsi-SQEMU_QEMU_HARDDISK_105ff901-f58d-459b-b46a-1fffc4887b06'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_105ff901-f58d-459b-b46a-1fffc4887b06-part1', 'scsi-SQEMU_QEMU_HARDDISK_105ff901-f58d-459b-b46a-1fffc4887b06-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_105ff901-f58d-459b-b46a-1fffc4887b06-part14', 'scsi-SQEMU_QEMU_HARDDISK_105ff901-f58d-459b-b46a-1fffc4887b06-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_105ff901-f58d-459b-b46a-1fffc4887b06-part15', 'scsi-SQEMU_QEMU_HARDDISK_105ff901-f58d-459b-b46a-1fffc4887b06-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_105ff901-f58d-459b-b46a-1fffc4887b06-part16', 'scsi-SQEMU_QEMU_HARDDISK_105ff901-f58d-459b-b46a-1fffc4887b06-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.580081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.580092 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.580110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.580117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.580129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.580137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.580145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.580152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.580165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.580172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.580233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_352591b1-afb2-4164-a476-424b5209d609', 'scsi-SQEMU_QEMU_HARDDISK_352591b1-afb2-4164-a476-424b5209d609'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_352591b1-afb2-4164-a476-424b5209d609-part1', 'scsi-SQEMU_QEMU_HARDDISK_352591b1-afb2-4164-a476-424b5209d609-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_352591b1-afb2-4164-a476-424b5209d609-part14', 'scsi-SQEMU_QEMU_HARDDISK_352591b1-afb2-4164-a476-424b5209d609-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_352591b1-afb2-4164-a476-424b5209d609-part15', 'scsi-SQEMU_QEMU_HARDDISK_352591b1-afb2-4164-a476-424b5209d609-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_352591b1-afb2-4164-a476-424b5209d609-part16', 'scsi-SQEMU_QEMU_HARDDISK_352591b1-afb2-4164-a476-424b5209d609-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.580245 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.580253 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.580269 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.580277 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.580284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.580292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.580299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.580306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.580360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.580371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.580394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
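Every item in these loops is skipped because the tasks are gated on `osd_auto_discovery | default(False) | bool`, which is false in this deployment (the OSD devices are listed explicitly instead). As a rough sketch of what an auto-discovery filter of this kind does, assuming a hypothetical `discover_osd_devices` helper (not ceph-ansible's actual code): it would walk `ansible_facts['devices']` and keep only empty, non-removable physical disks, excluding loop/device-mapper/optical devices and disks already partitioned or claimed by LVM holders.

```python
# Hypothetical sketch of an osd_auto_discovery-style device filter.
# 'devices' mimics ansible_facts['devices'] as seen in the log above.
def discover_osd_devices(devices, osd_auto_discovery=False):
    if not osd_auto_discovery:  # mirrors: osd_auto_discovery | default(False) | bool
        return []               # every loop item is skipped, as in this log
    eligible = []
    for name, info in devices.items():
        if name.startswith(('loop', 'dm-', 'sr')):  # loop/devmapper/optical
            continue
        if info.get('removable') == '1':            # removable media (e.g. sr0)
            continue
        if info.get('partitions'):                  # already-partitioned disks
            continue
        if info.get('holders'):                     # claimed disks (LVM/ceph)
            continue
        eligible.append('/dev/' + name)
    return eligible

devices = {
    'loop0': {'partitions': {}, 'holders': [], 'removable': '0'},
    'sda':   {'partitions': {'sda1': {}}, 'holders': [], 'removable': '0'},
    'sdb':   {'partitions': {}, 'holders': ['ceph-osd-block'], 'removable': '0'},
    'sdd':   {'partitions': {}, 'holders': [], 'removable': '0'},
    'sr0':   {'partitions': {}, 'holders': [], 'removable': '1'},
}
print(discover_osd_devices(devices, osd_auto_discovery=True))   # ['/dev/sdd']
print(discover_osd_devices(devices, osd_auto_discovery=False))  # []
```

With auto-discovery off, the filter returns nothing, which is exactly why the `ceph-facts` tasks skip every device item on every testbed node.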
2025-09-08 00:55:15.580402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:55:15.580410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e113876-c434-4dec-99d9-345ed786448b', 'scsi-SQEMU_QEMU_HARDDISK_2e113876-c434-4dec-99d9-345ed786448b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e113876-c434-4dec-99d9-345ed786448b-part1', 'scsi-SQEMU_QEMU_HARDDISK_2e113876-c434-4dec-99d9-345ed786448b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e113876-c434-4dec-99d9-345ed786448b-part14', 'scsi-SQEMU_QEMU_HARDDISK_2e113876-c434-4dec-99d9-345ed786448b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e113876-c434-4dec-99d9-345ed786448b-part15', 'scsi-SQEMU_QEMU_HARDDISK_2e113876-c434-4dec-99d9-345ed786448b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2e113876-c434-4dec-99d9-345ed786448b-part16', 'scsi-SQEMU_QEMU_HARDDISK_2e113876-c434-4dec-99d9-345ed786448b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.580481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:55:15.580494 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.580502 | orchestrator | 2025-09-08 00:55:15.580509 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-08 00:55:15.580518 | orchestrator | Monday 08 September 2025 00:44:23 +0000 (0:00:01.505) 0:00:32.390 ****** 2025-09-08 00:55:15.580531 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6b18b724--0587--5812--9148--41071cea985b-osd--block--6b18b724--0587--5812--9148--41071cea985b', 
'dm-uuid-LVM-Y0fi8lofW9mz3to22zm3kbYL7KhM63Y1xlLKU16Wd1xmsixYPceTgcWXPxL1aXLJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:55:15.580541 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9b42feaf--b3bc--5f68--b3eb--37674b93132b-osd--block--9b42feaf--b3bc--5f68--b3eb--37674b93132b', 'dm-uuid-LVM-xGVWByu1BSZhpyUwFR2O5UYeo7Gtrkxn0tf2jzAkUNYUZtr15YuY2x3DX2l8Q9zK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:55:15.580554 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:55:15.580563 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:55:15.580571 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:55:15.580625 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:55:15.580636 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.580655 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.580669 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.580676 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.580786 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691', 'scsi-SQEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part1', 'scsi-SQEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part14', 'scsi-SQEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part15', 'scsi-SQEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part16', 'scsi-SQEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.580813 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6b18b724--0587--5812--9148--41071cea985b-osd--block--6b18b724--0587--5812--9148--41071cea985b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GcQWON-gSjm-sim7-whlw-7wIw-EiAC-zzmCr8', 'scsi-0QEMU_QEMU_HARDDISK_db00b734-b58e-4932-8acd-6a266572e733', 'scsi-SQEMU_QEMU_HARDDISK_db00b734-b58e-4932-8acd-6a266572e733'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.580829 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9b42feaf--b3bc--5f68--b3eb--37674b93132b-osd--block--9b42feaf--b3bc--5f68--b3eb--37674b93132b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WWtSOz-FohK-vviu-kCAU-m6xa-xJ2T-jmK4pl', 'scsi-0QEMU_QEMU_HARDDISK_8d0cadb8-6915-4fd2-b4e0-4946f7f23ce1', 'scsi-SQEMU_QEMU_HARDDISK_8d0cadb8-6915-4fd2-b4e0-4946f7f23ce1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.580837 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f7dc1ee-c7b6-4bcc-8d38-7d9cabc41a41', 'scsi-SQEMU_QEMU_HARDDISK_1f7dc1ee-c7b6-4bcc-8d38-7d9cabc41a41'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.580909 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.580925 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ea3e0024--52d1--5c15--9011--f3e2d7c1d29b-osd--block--ea3e0024--52d1--5c15--9011--f3e2d7c1d29b', 'dm-uuid-LVM-VzUCABi2BuQurjhQCMyt68tIclROzO0ZBMrjgqoBkw7h7LcfDuM8CcFlGhUXcEr9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.580933 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.580941 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aa077d44--869a--533b--aa21--81dea0f926a7-osd--block--aa077d44--869a--533b--aa21--81dea0f926a7', 'dm-uuid-LVM-wucYTeDbWI7QcvaQP4VymYqWox5BgGvEFFgtYXXIG7lpzKethcm4zCW52693sfjv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.580954 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.580962 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.580970 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581024 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581035 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581046 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--df550631--cfd3--5799--aa47--c702e103b9e1-osd--block--df550631--cfd3--5799--aa47--c702e103b9e1', 'dm-uuid-LVM-eqsdTLwl2bClC02oRazwntyfs3menYK4OsSbFlRGX7fYoYxlReT90CLA97P8MMZm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581059 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581067 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581074 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581126 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581137 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581149 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581162 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eee7454c--3e15--5681--817b--16336d12a7fd-osd--block--eee7454c--3e15--5681--817b--16336d12a7fd', 'dm-uuid-LVM-bDkGLzHpLD658aJtO5kZxXyH8rTVEF09elbHrBEgYzpzpjqvnJgfUL1koASdL1iJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581229 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_105ff901-f58d-459b-b46a-1fffc4887b06', 'scsi-SQEMU_QEMU_HARDDISK_105ff901-f58d-459b-b46a-1fffc4887b06'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_105ff901-f58d-459b-b46a-1fffc4887b06-part1', 'scsi-SQEMU_QEMU_HARDDISK_105ff901-f58d-459b-b46a-1fffc4887b06-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_105ff901-f58d-459b-b46a-1fffc4887b06-part14', 'scsi-SQEMU_QEMU_HARDDISK_105ff901-f58d-459b-b46a-1fffc4887b06-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_105ff901-f58d-459b-b46a-1fffc4887b06-part15', 'scsi-SQEMU_QEMU_HARDDISK_105ff901-f58d-459b-b46a-1fffc4887b06-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_105ff901-f58d-459b-b46a-1fffc4887b06-part16', 'scsi-SQEMU_QEMU_HARDDISK_105ff901-f58d-459b-b46a-1fffc4887b06-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581242 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581261 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581269 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581276 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581284 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581292 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581348 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581368 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581377 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581384 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581403 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581411 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581512 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581525 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581547 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581555 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581562 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581629 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_352591b1-afb2-4164-a476-424b5209d609', 'scsi-SQEMU_QEMU_HARDDISK_352591b1-afb2-4164-a476-424b5209d609'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_352591b1-afb2-4164-a476-424b5209d609-part1', 'scsi-SQEMU_QEMU_HARDDISK_352591b1-afb2-4164-a476-424b5209d609-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_352591b1-afb2-4164-a476-424b5209d609-part14', 'scsi-SQEMU_QEMU_HARDDISK_352591b1-afb2-4164-a476-424b5209d609-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_352591b1-afb2-4164-a476-424b5209d609-part15', 'scsi-SQEMU_QEMU_HARDDISK_352591b1-afb2-4164-a476-424b5209d609-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_352591b1-afb2-4164-a476-424b5209d609-part16', 'scsi-SQEMU_QEMU_HARDDISK_352591b1-afb2-4164-a476-424b5209d609-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581652 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581660 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581668 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581722 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3', 'scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part1', 'scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part14', 'scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part15', 'scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part16', 'scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581743 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581751 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ea3e0024--52d1--5c15--9011--f3e2d7c1d29b-osd--block--ea3e0024--52d1--5c15--9011--f3e2d7c1d29b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HHbReR-4ZVt-wANb-cQX5-t55v-9em9-DaJpxU', 'scsi-0QEMU_QEMU_HARDDISK_4b92dc1e-8c5d-4e7b-ac22-fcae021763ab', 'scsi-SQEMU_QEMU_HARDDISK_4b92dc1e-8c5d-4e7b-ac22-fcae021763ab'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:55:15.581759 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:55:15.581810 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--aa077d44--869a--533b--aa21--81dea0f926a7-osd--block--aa077d44--869a--533b--aa21--81dea0f926a7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fXg5xk-Hlji-j4Cf-XVAJ-bXQn-YbEm-JQawdo', 'scsi-0QEMU_QEMU_HARDDISK_59c5476b-d42d-4c70-8df0-eefae278ca55', 'scsi-SQEMU_QEMU_HARDDISK_59c5476b-d42d-4c70-8df0-eefae278ca55'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:55:15.581827 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:55:15.581840 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4ba40c0-17ae-4bff-a3cd-012c30b3474e', 'scsi-SQEMU_QEMU_HARDDISK_d4ba40c0-17ae-4bff-a3cd-012c30b3474e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:55:15.581848 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.581856 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8', 'scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part1', 'scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part14', 'scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part15', 'scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part16', 'scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:55:15.581864 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.581927 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:55:15.581944 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.581956 | orchestrator | skipping: [testbed-node-5] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--df550631--cfd3--5799--aa47--c702e103b9e1-osd--block--df550631--cfd3--5799--aa47--c702e103b9e1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Rhjrl9-NLaA-oEAo-wptm-3z2r-6sbO-DYv4y8', 'scsi-0QEMU_QEMU_HARDDISK_a654280a-a62d-423c-bf4b-ecfb391ad989', 'scsi-SQEMU_QEMU_HARDDISK_a654280a-a62d-423c-bf4b-ecfb391ad989'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:55:15.581964 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--eee7454c--3e15--5681--817b--16336d12a7fd-osd--block--eee7454c--3e15--5681--817b--16336d12a7fd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-k22s6L-vE8Z-OfUE-wH35-l99G-jzdB-FLSIKu', 'scsi-0QEMU_QEMU_HARDDISK_63bbd3aa-19f1-48b0-9249-561d852b638c', 'scsi-SQEMU_QEMU_HARDDISK_63bbd3aa-19f1-48b0-9249-561d852b638c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:55:15.581972 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17ecbc41-9c45-4ac3-8b64-5422c11ec1e9', 'scsi-SQEMU_QEMU_HARDDISK_17ecbc41-9c45-4ac3-8b64-5422c11ec1e9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:55:15.581979 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.581992 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.582075 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.582103 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.582113 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.582121 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.582128 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.582136 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.582198 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.582209 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.582222 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e113876-c434-4dec-99d9-345ed786448b', 'scsi-SQEMU_QEMU_HARDDISK_2e113876-c434-4dec-99d9-345ed786448b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e113876-c434-4dec-99d9-345ed786448b-part1', 'scsi-SQEMU_QEMU_HARDDISK_2e113876-c434-4dec-99d9-345ed786448b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e113876-c434-4dec-99d9-345ed786448b-part14', 'scsi-SQEMU_QEMU_HARDDISK_2e113876-c434-4dec-99d9-345ed786448b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e113876-c434-4dec-99d9-345ed786448b-part15', 'scsi-SQEMU_QEMU_HARDDISK_2e113876-c434-4dec-99d9-345ed786448b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e113876-c434-4dec-99d9-345ed786448b-part16', 'scsi-SQEMU_QEMU_HARDDISK_2e113876-c434-4dec-99d9-345ed786448b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.582230 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:55:15.582243 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.582250 | orchestrator |
2025-09-08 00:55:15.582258 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-09-08 00:55:15.582265 | orchestrator | Monday 08 September 2025 00:44:24 +0000 (0:00:01.169) 0:00:33.559 ******
2025-09-08 00:55:15.582315 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.582326 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.582333 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.582340 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.582347 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.582354 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.582361 | orchestrator |
2025-09-08 00:55:15.582380 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-09-08 00:55:15.582387 | orchestrator | Monday 08 September 2025 00:44:25 +0000 (0:00:01.119) 0:00:34.679 ******
2025-09-08 00:55:15.582395 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.582402 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.582410 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.582417 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.582424 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.582432 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.582439 | orchestrator |
2025-09-08 00:55:15.582447 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-08 00:55:15.582454 | orchestrator | Monday 08 September 2025 00:44:26 +0000 (0:00:00.660) 0:00:35.340 ******
2025-09-08 00:55:15.582477 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.582485 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.582492 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.582499 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.582506 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.582513 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.582520 | orchestrator |
2025-09-08 00:55:15.582527 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-08 00:55:15.582538 | orchestrator | Monday 08 September 2025 00:44:27 +0000 (0:00:01.015) 0:00:36.355 ******
2025-09-08 00:55:15.582546 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.582553 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.582560 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.582567 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.582574 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.582581 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.582607 | orchestrator |
2025-09-08 00:55:15.582615 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-08 00:55:15.582622 | orchestrator | Monday 08 September 2025 00:44:27 +0000 (0:00:00.504) 0:00:36.860 ******
2025-09-08 00:55:15.582629 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.582636 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.582643 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.582650 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.582658 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.582665 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.582672 | orchestrator |
2025-09-08 00:55:15.582679 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-08 00:55:15.582686 | orchestrator | Monday 08 September 2025 00:44:28 +0000 (0:00:00.713) 0:00:37.573 ******
2025-09-08 00:55:15.582693 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.582700 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.582707 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.582723 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.582731 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.582738 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.582745 | orchestrator |
2025-09-08 00:55:15.582752 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-08 00:55:15.582759 | orchestrator | Monday 08 September 2025 00:44:29 +0000 (0:00:00.715) 0:00:38.288 ******
2025-09-08 00:55:15.582766 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-08 00:55:15.582774 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-08 00:55:15.582781 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-08 00:55:15.582788 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-08 00:55:15.582795 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-08 00:55:15.582802 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-08 00:55:15.582809 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-08 00:55:15.582816 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-08 00:55:15.582824 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-08 00:55:15.582831 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-09-08 00:55:15.582838 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-09-08 00:55:15.582845 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-09-08 00:55:15.582852 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-09-08 00:55:15.582859 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-09-08 00:55:15.582866 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-09-08 00:55:15.582873 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-09-08 00:55:15.582880 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-09-08 00:55:15.582887 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-09-08 00:55:15.582894 | orchestrator |
2025-09-08 00:55:15.582902 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-09-08 00:55:15.582909 | orchestrator | Monday 08 September 2025 00:44:32 +0000 (0:00:03.374) 0:00:41.663 ******
2025-09-08 00:55:15.582916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-08 00:55:15.582923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-08 00:55:15.582930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-08 00:55:15.582937 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-08 00:55:15.582944 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-08 00:55:15.582953 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-08 00:55:15.582962 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.582970 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-08 00:55:15.582978 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-08 00:55:15.582986 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-08 00:55:15.583018 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.583028 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-08 00:55:15.583037 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-08 00:55:15.583045 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-08 00:55:15.583054 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.583063 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-08 00:55:15.583071 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-08 00:55:15.583079 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-08 00:55:15.583088 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.583096 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.583105 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-08 00:55:15.583119 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-08 00:55:15.583127 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-08 00:55:15.583136 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.583144 | orchestrator |
2025-09-08 00:55:15.583153 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-09-08 00:55:15.583161 | orchestrator | Monday 08 September 2025 00:44:33 +0000 (0:00:00.673) 0:00:42.336 ******
2025-09-08 00:55:15.583170 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.583178 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.583186 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.583209 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:55:15.583218 | orchestrator |
2025-09-08 00:55:15.583227 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-08 00:55:15.583236 | orchestrator | Monday 08 September 2025 00:44:34 +0000 (0:00:01.358) 0:00:43.695 ******
2025-09-08 00:55:15.583245 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.583253 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.583262 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.583270 | orchestrator |
2025-09-08 00:55:15.583278 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-08 00:55:15.583287 | orchestrator | Monday 08 September 2025 00:44:35 +0000 (0:00:00.407) 0:00:44.103 ******
2025-09-08 00:55:15.583295 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.583304 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.583311 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.583318 | orchestrator |
2025-09-08 00:55:15.583325 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-08 00:55:15.583332 | orchestrator | Monday 08 September 2025 00:44:35 +0000 (0:00:00.475) 0:00:44.578 ******
2025-09-08 00:55:15.583339 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.583346 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.583353 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.583360 | orchestrator |
2025-09-08 00:55:15.583368 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-08 00:55:15.583375 | orchestrator | Monday 08 September 2025 00:44:36 +0000 (0:00:00.669) 0:00:45.340 ******
2025-09-08 00:55:15.583382 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.583389 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.583396 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.583403 | orchestrator |
2025-09-08 00:55:15.583411 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-08 00:55:15.583418 | orchestrator | Monday 08 September 2025 00:44:36 +0000 (0:00:00.445) 0:00:46.009 ******
2025-09-08 00:55:15.583425 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:55:15.583432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:55:15.583439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:55:15.583446 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.583453 | orchestrator |
2025-09-08 00:55:15.583475 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-08 00:55:15.583482 | orchestrator | Monday 08 September 2025 00:44:37 +0000 (0:00:00.445) 0:00:46.455 ******
2025-09-08 00:55:15.583489 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:55:15.583496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:55:15.583503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:55:15.583511 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.583518 | orchestrator |
2025-09-08 00:55:15.583525 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-08 00:55:15.583532 | orchestrator | Monday 08 September 2025 00:44:37 +0000 (0:00:00.545) 0:00:47.000 ******
2025-09-08 00:55:15.583544 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:55:15.583551 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:55:15.583559 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:55:15.583566 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.583573 | orchestrator |
2025-09-08 00:55:15.583580 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-08 00:55:15.583587 | orchestrator | Monday 08 September 2025 00:44:38 +0000 (0:00:00.447) 0:00:47.448 ******
2025-09-08 00:55:15.583594 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.583601 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.583608 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.583616 | orchestrator |
2025-09-08 00:55:15.583623 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-08 00:55:15.583630 | orchestrator | Monday 08 September 2025 00:44:38 +0000 (0:00:00.636) 0:00:48.084 ******
2025-09-08 00:55:15.583637 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-08 00:55:15.583644 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-08 00:55:15.583651 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-08 00:55:15.583658 | orchestrator |
2025-09-08 00:55:15.583686 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-08 00:55:15.583695 | orchestrator | Monday 08 September 2025 00:44:40 +0000 (0:00:01.803) 0:00:49.888 ******
2025-09-08 00:55:15.583702 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-08 00:55:15.583710 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-08 00:55:15.583717 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-08 00:55:15.583724 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:55:15.583731 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-08 00:55:15.583738 |
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-08 00:55:15.583745 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-08 00:55:15.583752 | orchestrator | 2025-09-08 00:55:15.583759 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-08 00:55:15.583767 | orchestrator | Monday 08 September 2025 00:44:41 +0000 (0:00:01.094) 0:00:50.982 ****** 2025-09-08 00:55:15.583774 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-08 00:55:15.583781 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-08 00:55:15.583791 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-08 00:55:15.583799 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-08 00:55:15.583806 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-08 00:55:15.583813 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-08 00:55:15.583820 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-08 00:55:15.583828 | orchestrator | 2025-09-08 00:55:15.583835 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-08 00:55:15.583842 | orchestrator | Monday 08 September 2025 00:44:44 +0000 (0:00:02.743) 0:00:53.726 ****** 2025-09-08 00:55:15.583849 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:55:15.583857 | orchestrator | 2025-09-08 00:55:15.583864 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2025-09-08 00:55:15.583871 | orchestrator | Monday 08 September 2025 00:44:46 +0000 (0:00:01.707) 0:00:55.434 ****** 2025-09-08 00:55:15.583884 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:55:15.583891 | orchestrator | 2025-09-08 00:55:15.583899 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-08 00:55:15.583906 | orchestrator | Monday 08 September 2025 00:44:47 +0000 (0:00:01.455) 0:00:56.889 ****** 2025-09-08 00:55:15.583913 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.583920 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.583927 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.583934 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.583941 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.583949 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.583956 | orchestrator | 2025-09-08 00:55:15.583963 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-08 00:55:15.583970 | orchestrator | Monday 08 September 2025 00:44:50 +0000 (0:00:02.951) 0:00:59.841 ****** 2025-09-08 00:55:15.583977 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.583984 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.583991 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.583998 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.584005 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.584013 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.584020 | orchestrator | 2025-09-08 00:55:15.584027 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-08 00:55:15.584034 | orchestrator | Monday 08 September 2025 00:44:52 +0000 
(0:00:01.510) 0:01:01.352 ****** 2025-09-08 00:55:15.584041 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.584048 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.584055 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.584062 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.584069 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.584076 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.584083 | orchestrator | 2025-09-08 00:55:15.584091 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-08 00:55:15.584098 | orchestrator | Monday 08 September 2025 00:44:53 +0000 (0:00:00.943) 0:01:02.295 ****** 2025-09-08 00:55:15.584105 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.584112 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.584119 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.584126 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.584133 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.584140 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.584147 | orchestrator | 2025-09-08 00:55:15.584154 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-08 00:55:15.584161 | orchestrator | Monday 08 September 2025 00:44:54 +0000 (0:00:01.215) 0:01:03.511 ****** 2025-09-08 00:55:15.584169 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.584176 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.584183 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.584190 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.584197 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.584204 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.584211 | orchestrator | 2025-09-08 00:55:15.584218 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2025-09-08 00:55:15.584244 | orchestrator | Monday 08 September 2025 00:44:57 +0000 (0:00:02.713) 0:01:06.225 ****** 2025-09-08 00:55:15.584253 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.584260 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.584267 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.584274 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.584281 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.584288 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.584295 | orchestrator | 2025-09-08 00:55:15.584307 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-08 00:55:15.584315 | orchestrator | Monday 08 September 2025 00:44:58 +0000 (0:00:01.660) 0:01:07.886 ****** 2025-09-08 00:55:15.584322 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.584329 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.584336 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.584343 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.584350 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.584357 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.584364 | orchestrator | 2025-09-08 00:55:15.584371 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-08 00:55:15.584378 | orchestrator | Monday 08 September 2025 00:45:00 +0000 (0:00:01.541) 0:01:09.428 ****** 2025-09-08 00:55:15.584386 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.584393 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.584400 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.584407 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.584414 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.584421 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.584428 | orchestrator | 2025-09-08 
00:55:15.584439 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-08 00:55:15.584447 | orchestrator | Monday 08 September 2025 00:45:02 +0000 (0:00:01.689) 0:01:11.118 ****** 2025-09-08 00:55:15.584454 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.584475 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.584483 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.584490 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.584497 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.584504 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.584512 | orchestrator | 2025-09-08 00:55:15.584519 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-08 00:55:15.584526 | orchestrator | Monday 08 September 2025 00:45:03 +0000 (0:00:01.642) 0:01:12.760 ****** 2025-09-08 00:55:15.584533 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.584541 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.584548 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.584555 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.584562 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.584570 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.584577 | orchestrator | 2025-09-08 00:55:15.584584 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-08 00:55:15.584592 | orchestrator | Monday 08 September 2025 00:45:04 +0000 (0:00:01.134) 0:01:13.895 ****** 2025-09-08 00:55:15.584599 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.584606 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.584613 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.584620 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.584628 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.584635 | 
orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.584642 | orchestrator | 2025-09-08 00:55:15.584649 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-08 00:55:15.584657 | orchestrator | Monday 08 September 2025 00:45:05 +0000 (0:00:00.836) 0:01:14.732 ****** 2025-09-08 00:55:15.584664 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.584671 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.584678 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.584685 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.584693 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.584700 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.584707 | orchestrator | 2025-09-08 00:55:15.584714 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-08 00:55:15.584722 | orchestrator | Monday 08 September 2025 00:45:07 +0000 (0:00:01.383) 0:01:16.115 ****** 2025-09-08 00:55:15.584729 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.584741 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.584748 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.584755 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.584762 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.584769 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.584777 | orchestrator | 2025-09-08 00:55:15.584784 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-08 00:55:15.584791 | orchestrator | Monday 08 September 2025 00:45:07 +0000 (0:00:00.732) 0:01:16.848 ****** 2025-09-08 00:55:15.584798 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.584806 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.584813 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.584820 | orchestrator | skipping: [testbed-node-0] 2025-09-08 
00:55:15.584827 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.584835 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.584842 | orchestrator | 2025-09-08 00:55:15.584849 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-08 00:55:15.584856 | orchestrator | Monday 08 September 2025 00:45:08 +0000 (0:00:00.923) 0:01:17.771 ****** 2025-09-08 00:55:15.584864 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.584871 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.584878 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.584885 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.584892 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.584899 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.584907 | orchestrator | 2025-09-08 00:55:15.584914 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-08 00:55:15.584921 | orchestrator | Monday 08 September 2025 00:45:09 +0000 (0:00:00.620) 0:01:18.392 ****** 2025-09-08 00:55:15.584928 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.584935 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.584943 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.584950 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.584957 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.584964 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.584972 | orchestrator | 2025-09-08 00:55:15.584999 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-08 00:55:15.585007 | orchestrator | Monday 08 September 2025 00:45:10 +0000 (0:00:00.856) 0:01:19.249 ****** 2025-09-08 00:55:15.585014 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.585021 | orchestrator | skipping: [testbed-node-4] 2025-09-08 
00:55:15.585028 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.585036 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.585043 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.585050 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.585057 | orchestrator | 2025-09-08 00:55:15.585064 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-08 00:55:15.585071 | orchestrator | Monday 08 September 2025 00:45:10 +0000 (0:00:00.659) 0:01:19.908 ****** 2025-09-08 00:55:15.585078 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.585085 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.585092 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.585099 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.585106 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.585114 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.585121 | orchestrator | 2025-09-08 00:55:15.585128 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-08 00:55:15.585135 | orchestrator | Monday 08 September 2025 00:45:12 +0000 (0:00:01.188) 0:01:21.097 ****** 2025-09-08 00:55:15.585142 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.585149 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.585156 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.585163 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.585170 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.585180 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.585192 | orchestrator | 2025-09-08 00:55:15.585200 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-09-08 00:55:15.585207 | orchestrator | Monday 08 September 2025 00:45:13 +0000 (0:00:01.179) 0:01:22.276 ****** 2025-09-08 00:55:15.585214 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:55:15.585221 | 
orchestrator | changed: [testbed-node-3] 2025-09-08 00:55:15.585228 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:55:15.585235 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:55:15.585242 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:55:15.585249 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:55:15.585256 | orchestrator | 2025-09-08 00:55:15.585263 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-09-08 00:55:15.585271 | orchestrator | Monday 08 September 2025 00:45:14 +0000 (0:00:01.602) 0:01:23.878 ****** 2025-09-08 00:55:15.585278 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:55:15.585285 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:55:15.585292 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:55:15.585299 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:55:15.585306 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:55:15.585313 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:55:15.585320 | orchestrator | 2025-09-08 00:55:15.585327 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-09-08 00:55:15.585334 | orchestrator | Monday 08 September 2025 00:45:17 +0000 (0:00:02.448) 0:01:26.326 ****** 2025-09-08 00:55:15.585341 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:55:15.585348 | orchestrator | 2025-09-08 00:55:15.585355 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-09-08 00:55:15.585362 | orchestrator | Monday 08 September 2025 00:45:18 +0000 (0:00:01.417) 0:01:27.744 ****** 2025-09-08 00:55:15.585369 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.585376 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.585383 | orchestrator | 
skipping: [testbed-node-5] 2025-09-08 00:55:15.585390 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.585397 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.585404 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.585411 | orchestrator | 2025-09-08 00:55:15.585419 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-09-08 00:55:15.585426 | orchestrator | Monday 08 September 2025 00:45:19 +0000 (0:00:00.779) 0:01:28.523 ****** 2025-09-08 00:55:15.585433 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.585440 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.585447 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.585454 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.585475 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.585482 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.585489 | orchestrator | 2025-09-08 00:55:15.585496 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-09-08 00:55:15.585503 | orchestrator | Monday 08 September 2025 00:45:20 +0000 (0:00:00.855) 0:01:29.379 ****** 2025-09-08 00:55:15.585510 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-08 00:55:15.585518 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-08 00:55:15.585525 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-08 00:55:15.585532 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-08 00:55:15.585539 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-08 00:55:15.585546 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-08 00:55:15.585553 | orchestrator | ok: 
[testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-08 00:55:15.585565 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-08 00:55:15.585572 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-08 00:55:15.585579 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-08 00:55:15.585586 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-08 00:55:15.585613 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-08 00:55:15.585622 | orchestrator | 2025-09-08 00:55:15.585629 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-09-08 00:55:15.585636 | orchestrator | Monday 08 September 2025 00:45:21 +0000 (0:00:01.356) 0:01:30.735 ****** 2025-09-08 00:55:15.585643 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:55:15.585651 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:55:15.585658 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:55:15.585665 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:55:15.585672 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:55:15.585679 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:55:15.585686 | orchestrator | 2025-09-08 00:55:15.585693 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-09-08 00:55:15.585700 | orchestrator | Monday 08 September 2025 00:45:22 +0000 (0:00:01.240) 0:01:31.976 ****** 2025-09-08 00:55:15.585707 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.585715 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.585722 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.585729 | orchestrator | skipping: [testbed-node-0] 2025-09-08 
00:55:15.585736 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.585743 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.585750 | orchestrator | 2025-09-08 00:55:15.585757 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-09-08 00:55:15.585764 | orchestrator | Monday 08 September 2025 00:45:23 +0000 (0:00:00.836) 0:01:32.812 ****** 2025-09-08 00:55:15.585771 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.585782 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.585789 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.585796 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.585803 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.585810 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.585817 | orchestrator | 2025-09-08 00:55:15.585825 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-09-08 00:55:15.585832 | orchestrator | Monday 08 September 2025 00:45:24 +0000 (0:00:00.789) 0:01:33.601 ****** 2025-09-08 00:55:15.585839 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.585846 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.585853 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.585860 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.585868 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.585875 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.585882 | orchestrator | 2025-09-08 00:55:15.585889 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-09-08 00:55:15.585896 | orchestrator | Monday 08 September 2025 00:45:25 +0000 (0:00:00.600) 0:01:34.202 ****** 2025-09-08 00:55:15.585904 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:55:15.585911 | orchestrator | 2025-09-08 00:55:15.585918 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-09-08 00:55:15.585925 | orchestrator | Monday 08 September 2025 00:45:26 +0000 (0:00:01.290) 0:01:35.492 ****** 2025-09-08 00:55:15.585932 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.585939 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.585946 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.585958 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.585965 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.585972 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.585979 | orchestrator | 2025-09-08 00:55:15.585987 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-09-08 00:55:15.585994 | orchestrator | Monday 08 September 2025 00:46:31 +0000 (0:01:04.652) 0:02:40.145 ****** 2025-09-08 00:55:15.586001 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-08 00:55:15.586008 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-08 00:55:15.586034 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-08 00:55:15.586041 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.586050 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-08 00:55:15.586057 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-08 00:55:15.586065 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-08 00:55:15.586072 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.586079 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  
2025-09-08 00:55:15.586086 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-08 00:55:15.586093 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-08 00:55:15.586101 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.586108 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-08 00:55:15.586115 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-08 00:55:15.586122 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-08 00:55:15.586130 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.586137 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-08 00:55:15.586144 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-08 00:55:15.586151 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-08 00:55:15.586158 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.586166 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-08 00:55:15.586193 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-08 00:55:15.586202 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-08 00:55:15.586209 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.586217 | orchestrator |
2025-09-08 00:55:15.586224 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-09-08 00:55:15.586231 | orchestrator | Monday 08 September 2025 00:46:31 +0000 (0:00:00.763) 0:02:40.908 ******
2025-09-08 00:55:15.586238 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.586245 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.586252 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.586259 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.586266 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.586273 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.586281 | orchestrator |
2025-09-08 00:55:15.586288 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-09-08 00:55:15.586295 | orchestrator | Monday 08 September 2025 00:46:32 +0000 (0:00:00.802) 0:02:41.711 ******
2025-09-08 00:55:15.586302 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.586309 | orchestrator |
2025-09-08 00:55:15.586316 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-09-08 00:55:15.586323 | orchestrator | Monday 08 September 2025 00:46:32 +0000 (0:00:00.146) 0:02:41.857 ******
2025-09-08 00:55:15.586335 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.586343 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.586350 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.586357 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.586370 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.586378 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.586385 | orchestrator |
2025-09-08 00:55:15.586392 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-09-08 00:55:15.586399 | orchestrator | Monday 08 September 2025 00:46:33 +0000 (0:00:00.583) 0:02:42.441 ******
2025-09-08 00:55:15.586406 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.586413 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.586420 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.586427 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.586434 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.586442 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.586449 | orchestrator |
2025-09-08 00:55:15.586494 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-09-08 00:55:15.586503 | orchestrator | Monday 08 September 2025 00:46:34 +0000 (0:00:00.973) 0:02:43.414 ******
2025-09-08 00:55:15.586510 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.586517 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.586525 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.586532 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.586539 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.586546 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.586553 | orchestrator |
2025-09-08 00:55:15.586560 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-09-08 00:55:15.586568 | orchestrator | Monday 08 September 2025 00:46:35 +0000 (0:00:00.747) 0:02:44.161 ******
2025-09-08 00:55:15.586575 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.586582 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.586589 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.586596 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.586603 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.586610 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.586617 | orchestrator |
2025-09-08 00:55:15.586625 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-09-08 00:55:15.586632 | orchestrator | Monday 08 September 2025 00:46:38 +0000 (0:00:03.194) 0:02:47.355 ******
2025-09-08 00:55:15.586639 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.586646 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.586653 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.586660 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.586667 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.586674 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.586682 | orchestrator |
2025-09-08 00:55:15.586689 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-09-08 00:55:15.586696 | orchestrator | Monday 08 September 2025 00:46:39 +0000 (0:00:00.745) 0:02:48.101 ******
2025-09-08 00:55:15.586704 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:55:15.586711 | orchestrator |
2025-09-08 00:55:15.586718 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-09-08 00:55:15.586726 | orchestrator | Monday 08 September 2025 00:46:40 +0000 (0:00:01.414) 0:02:49.516 ******
2025-09-08 00:55:15.586733 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.586740 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.586747 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.586754 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.586761 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.586768 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.586776 | orchestrator |
2025-09-08 00:55:15.586788 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-09-08 00:55:15.586795 | orchestrator | Monday 08 September 2025 00:46:41 +0000 (0:00:01.034) 0:02:50.550 ******
2025-09-08 00:55:15.586803 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.586810 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.586817 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.586824 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.586831 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.586838 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.586846 | orchestrator |
2025-09-08 00:55:15.586852 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-09-08 00:55:15.586859 | orchestrator | Monday 08 September 2025 00:46:42 +0000 (0:00:00.807) 0:02:51.358 ******
2025-09-08 00:55:15.586865 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.586872 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.586878 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.586885 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.586891 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.586918 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.586926 | orchestrator |
2025-09-08 00:55:15.586932 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-09-08 00:55:15.586939 | orchestrator | Monday 08 September 2025 00:46:43 +0000 (0:00:00.750) 0:02:52.109 ******
2025-09-08 00:55:15.586946 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.586952 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.586959 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.586965 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.586972 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.586979 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.586985 | orchestrator |
2025-09-08 00:55:15.586992 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-09-08 00:55:15.586998 | orchestrator | Monday 08 September 2025 00:46:44 +0000 (0:00:01.171) 0:02:53.280 ******
2025-09-08 00:55:15.587005 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.587011 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.587018 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.587024 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.587031 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.587037 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.587044 | orchestrator |
2025-09-08 00:55:15.587050 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-09-08 00:55:15.587057 | orchestrator | Monday 08 September 2025 00:46:44 +0000 (0:00:00.722) 0:02:54.002 ******
2025-09-08 00:55:15.587064 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.587070 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.587081 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.587087 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.587094 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.587100 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.587107 | orchestrator |
2025-09-08 00:55:15.587113 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-09-08 00:55:15.587120 | orchestrator | Monday 08 September 2025 00:46:46 +0000 (0:00:01.114) 0:02:55.117 ******
2025-09-08 00:55:15.587127 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.587133 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.587140 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.587146 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.587153 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.587159 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.587166 | orchestrator |
2025-09-08 00:55:15.587172 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-09-08 00:55:15.587179 | orchestrator | Monday 08 September 2025 00:46:46 +0000 (0:00:00.846) 0:02:55.964 ******
2025-09-08 00:55:15.587190 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.587197 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.587203 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.587210 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.587216 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.587223 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.587229 | orchestrator |
2025-09-08 00:55:15.587236 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-09-08 00:55:15.587243 | orchestrator | Monday 08 September 2025 00:46:47 +0000 (0:00:00.755) 0:02:56.719 ******
2025-09-08 00:55:15.587249 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.587256 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.587263 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.587269 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.587276 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.587282 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.587289 | orchestrator |
2025-09-08 00:55:15.587295 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-09-08 00:55:15.587302 | orchestrator | Monday 08 September 2025 00:46:48 +0000 (0:00:01.151) 0:02:57.871 ******
2025-09-08 00:55:15.587309 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:55:15.587315 | orchestrator |
2025-09-08 00:55:15.587322 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-09-08 00:55:15.587329 | orchestrator | Monday 08 September 2025 00:46:49 +0000 (0:00:01.023) 0:02:58.894 ******
2025-09-08 00:55:15.587335 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-09-08 00:55:15.587342 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-09-08 00:55:15.587348 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-09-08 00:55:15.587355 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-09-08 00:55:15.587361 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-09-08 00:55:15.587368 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-09-08 00:55:15.587374 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-09-08 00:55:15.587381 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-09-08 00:55:15.587388 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-09-08 00:55:15.587394 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-09-08 00:55:15.587401 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-09-08 00:55:15.587407 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-09-08 00:55:15.587414 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-09-08 00:55:15.587420 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-09-08 00:55:15.587427 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-09-08 00:55:15.587433 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-09-08 00:55:15.587440 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-09-08 00:55:15.587446 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-09-08 00:55:15.587453 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-09-08 00:55:15.587473 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-09-08 00:55:15.587498 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-09-08 00:55:15.587506 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-09-08 00:55:15.587513 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-09-08 00:55:15.587519 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-09-08 00:55:15.587526 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-09-08 00:55:15.587532 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-09-08 00:55:15.587546 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-09-08 00:55:15.587553 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-09-08 00:55:15.587559 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-09-08 00:55:15.587566 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-09-08 00:55:15.587572 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-09-08 00:55:15.587579 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-09-08 00:55:15.587585 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-09-08 00:55:15.587592 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-09-08 00:55:15.587599 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-09-08 00:55:15.587605 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-09-08 00:55:15.587615 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-09-08 00:55:15.587622 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-09-08 00:55:15.587628 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-09-08 00:55:15.587635 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-09-08 00:55:15.587641 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-09-08 00:55:15.587648 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-09-08 00:55:15.587654 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-09-08 00:55:15.587661 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-09-08 00:55:15.587667 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-09-08 00:55:15.587674 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-08 00:55:15.587681 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-09-08 00:55:15.587687 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-09-08 00:55:15.587694 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-09-08 00:55:15.587700 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-08 00:55:15.587707 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-08 00:55:15.587713 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-08 00:55:15.587720 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-08 00:55:15.587726 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-08 00:55:15.587733 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-08 00:55:15.587739 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-08 00:55:15.587746 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-08 00:55:15.587753 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-08 00:55:15.587759 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-08 00:55:15.587766 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-08 00:55:15.587772 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-08 00:55:15.587779 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-08 00:55:15.587785 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-08 00:55:15.587792 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-08 00:55:15.587798 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-08 00:55:15.587805 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-08 00:55:15.587811 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-08 00:55:15.587823 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-08 00:55:15.587829 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-08 00:55:15.587836 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-08 00:55:15.587842 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-08 00:55:15.587849 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-08 00:55:15.587855 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-08 00:55:15.587862 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-08 00:55:15.587868 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-08 00:55:15.587875 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-08 00:55:15.587882 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-08 00:55:15.587888 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-08 00:55:15.587913 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-08 00:55:15.587921 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-08 00:55:15.587928 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-09-08 00:55:15.587934 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-08 00:55:15.587941 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-08 00:55:15.587947 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-08 00:55:15.587954 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-08 00:55:15.587961 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-09-08 00:55:15.587967 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-09-08 00:55:15.587974 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-09-08 00:55:15.587980 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-09-08 00:55:15.587987 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-09-08 00:55:15.587993 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-09-08 00:55:15.588000 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-09-08 00:55:15.588007 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-09-08 00:55:15.588017 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-09-08 00:55:15.588024 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-09-08 00:55:15.588031 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-09-08 00:55:15.588038 | orchestrator |
2025-09-08 00:55:15.588044 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-09-08 00:55:15.588051 | orchestrator | Monday 08 September 2025 00:46:55 +0000 (0:00:05.971) 0:03:04.865 ******
2025-09-08 00:55:15.588057 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.588064 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.588071 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.588077 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:55:15.588084 | orchestrator |
2025-09-08 00:55:15.588090 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-09-08 00:55:15.588097 | orchestrator | Monday 08 September 2025 00:46:56 +0000 (0:00:01.112) 0:03:05.978 ******
2025-09-08 00:55:15.588104 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-08 00:55:15.588111 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-08 00:55:15.588122 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-08 00:55:15.588129 | orchestrator |
2025-09-08 00:55:15.588136 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-09-08 00:55:15.588142 | orchestrator | Monday 08 September 2025 00:46:57 +0000 (0:00:01.090) 0:03:07.069 ******
2025-09-08 00:55:15.588149 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-08 00:55:15.588155 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-08 00:55:15.588162 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-08 00:55:15.588169 | orchestrator |
2025-09-08 00:55:15.588176 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-09-08 00:55:15.588182 | orchestrator | Monday 08 September 2025 00:46:59 +0000 (0:00:01.280) 0:03:08.350 ******
2025-09-08 00:55:15.588189 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.588195 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.588202 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.588209 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.588215 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.588222 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.588228 | orchestrator |
2025-09-08 00:55:15.588235 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-09-08 00:55:15.588242 | orchestrator | Monday 08 September 2025 00:46:59 +0000 (0:00:00.567) 0:03:08.917 ******
2025-09-08 00:55:15.588248 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.588255 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.588261 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.588268 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.588274 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.588281 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.588287 | orchestrator |
2025-09-08 00:55:15.588294 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-09-08 00:55:15.588301 | orchestrator | Monday 08 September 2025 00:47:00 +0000 (0:00:00.774) 0:03:09.691 ******
2025-09-08 00:55:15.588307 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.588314 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.588320 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.588327 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.588333 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.588340 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.588346 | orchestrator |
2025-09-08 00:55:15.588353 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-09-08 00:55:15.588359 | orchestrator | Monday 08 September 2025 00:47:01 +0000 (0:00:00.644) 0:03:10.336 ******
2025-09-08 00:55:15.588383 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.588391 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.588398 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.588404 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.588411 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.588417 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.588424 | orchestrator |
2025-09-08 00:55:15.588430 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-09-08 00:55:15.588437 | orchestrator | Monday 08 September 2025 00:47:01 +0000 (0:00:00.580) 0:03:10.917 ******
2025-09-08 00:55:15.588443 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.588450 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.588487 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.588495 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.588501 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.588515 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.588521 | orchestrator |
2025-09-08 00:55:15.588528 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-09-08 00:55:15.588535 | orchestrator | Monday 08 September 2025 00:47:02 +0000 (0:00:01.084) 0:03:12.001 ******
2025-09-08 00:55:15.588541 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.588548 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.588554 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.588561 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.588567 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.588574 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.588580 | orchestrator |
2025-09-08 00:55:15.588591 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-09-08 00:55:15.588598 | orchestrator | Monday 08 September 2025 00:47:03 +0000 (0:00:00.699) 0:03:12.701 ******
2025-09-08 00:55:15.588605 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.588611 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.588618 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.588624 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.588631 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.588638 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.588644 | orchestrator |
2025-09-08 00:55:15.588651 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-09-08 00:55:15.588657 | orchestrator | Monday 08 September 2025 00:47:04 +0000 (0:00:00.940) 0:03:13.641 ******
2025-09-08 00:55:15.588664 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.588671 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.588677 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.588684 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.588690 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.588697 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.588703 | orchestrator |
2025-09-08 00:55:15.588710 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-09-08 00:55:15.588716 | orchestrator | Monday 08 September 2025 00:47:05 +0000 (0:00:00.772) 0:03:14.414 ******
2025-09-08 00:55:15.588723 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.588729 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.588735 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.588741 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.588748 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.588754 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.588760 | orchestrator |
2025-09-08 00:55:15.588766 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-09-08 00:55:15.588772 | orchestrator | Monday 08 September 2025 00:47:08 +0000 (0:00:03.143) 0:03:17.558 ******
2025-09-08 00:55:15.588778 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.588784 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.588791 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.588797 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.588803 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.588809 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.588815 | orchestrator |
2025-09-08 00:55:15.588821 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-09-08 00:55:15.588827 | orchestrator | Monday 08 September 2025 00:47:09 +0000 (0:00:01.203) 0:03:18.762 ******
2025-09-08 00:55:15.588833 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.588839 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.588845 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.588851 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.588857 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.588864 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.588870 | orchestrator |
2025-09-08 00:55:15.588876 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-09-08 00:55:15.588886 | orchestrator | Monday 08 September 2025 00:47:10 +0000 (0:00:01.297) 0:03:20.060 ******
2025-09-08 00:55:15.588893 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.588899 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.588905 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.588911 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.588917 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.588923 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.588929 | orchestrator |
2025-09-08 00:55:15.588935 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-09-08 00:55:15.588941 | orchestrator | Monday 08 September 2025 00:47:11 +0000 (0:00:00.635) 0:03:20.695 ******
2025-09-08 00:55:15.588948 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-08 00:55:15.588954 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-08 00:55:15.588960 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-08 00:55:15.588966 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.588972 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.588979 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.588985 | orchestrator |
2025-09-08 00:55:15.589008 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-09-08 00:55:15.589015 | orchestrator | Monday 08 September 2025 00:47:12 +0000 (0:00:00.885) 0:03:21.581 ******
2025-09-08 00:55:15.589023 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-09-08 00:55:15.589031 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-09-08 00:55:15.589042 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-09-08 00:55:15.589050 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-09-08 00:55:15.589056 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.589062 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-09-08 00:55:15.589068 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.589075 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-09-08 00:55:15.589081 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.589087 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.589097 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.589103 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.589109 | orchestrator |
2025-09-08 00:55:15.589115 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-09-08 00:55:15.589121 | orchestrator | Monday 08 September 2025 00:47:14 +0000 (0:00:01.713) 0:03:23.294 ******
2025-09-08 00:55:15.589127 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.589133 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.589139 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.589145 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.589151 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.589157 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.589163 | orchestrator |
2025-09-08 00:55:15.589170 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-09-08 00:55:15.589176 | orchestrator | Monday 08 September 2025 00:47:15 +0000 (0:00:00.934) 0:03:24.229 ******
2025-09-08 00:55:15.589182 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.589188 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.589194 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.589200 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.589206 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.589212 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.589218 | orchestrator |
2025-09-08 00:55:15.589224 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-08 00:55:15.589230 | orchestrator | Monday 08 September 2025 00:47:15 +0000 (0:00:00.647) 0:03:24.876 ******
2025-09-08 00:55:15.589236 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.589242 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.589248 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.589254 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.589260 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.589266 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.589272 | orchestrator |
2025-09-08 00:55:15.589279 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-08 00:55:15.589285 | orchestrator | Monday 08 September 2025 00:47:16 +0000 (0:00:00.881) 0:03:25.758 ******
2025-09-08 00:55:15.589291 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.589297 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.589303 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.589309 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.589315 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.589321 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.589327 | orchestrator |
2025-09-08 00:55:15.589333 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-08 00:55:15.589339 | orchestrator | Monday 08 September 2025 00:47:18 +0000 (0:00:01.432) 0:03:27.190 ******
2025-09-08 00:55:15.589345 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.589366 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.589373 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.589380 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.589386 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.589392 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.589398 | orchestrator |
2025-09-08 00:55:15.589404 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-08 00:55:15.589410 | orchestrator | Monday 08 September 2025 00:47:19 +0000 (0:00:00.956) 0:03:28.146 ******
2025-09-08 00:55:15.589416 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.589422 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.589428 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.589434 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.589440 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.589447 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.589470 | orchestrator |
2025-09-08 00:55:15.589476 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-08 00:55:15.589482 | orchestrator | Monday 08 September 2025 00:47:19 +0000 (0:00:00.893) 0:03:29.039 ******
2025-09-08 00:55:15.589488 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:55:15.589494 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:55:15.589500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:55:15.589507 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.589513 | orchestrator |
2025-09-08 00:55:15.589519 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-08 00:55:15.589530 | orchestrator | Monday 08 September 2025 00:47:20 +0000 (0:00:00.540) 0:03:29.580 ******
2025-09-08 00:55:15.589537 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:55:15.589543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:55:15.589549 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:55:15.589555 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.589561 | orchestrator |
2025-09-08 00:55:15.589567 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-08 00:55:15.589574 | orchestrator | Monday 08 September 2025 00:47:21 +0000 (0:00:00.557) 0:03:30.138 ******
2025-09-08 00:55:15.589580 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:55:15.589586 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:55:15.589592 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:55:15.589598 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.589604 | orchestrator |
2025-09-08 00:55:15.589610 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-08 00:55:15.589616 | orchestrator | Monday 08 September 2025 00:47:21 +0000 (0:00:00.771) 0:03:30.909 ******
2025-09-08 00:55:15.589622 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.589628 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.589635 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.589641 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.589647 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.589653 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.589659 | orchestrator |
2025-09-08 00:55:15.589665 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-08 00:55:15.589671 | orchestrator | Monday 08 September 2025 00:47:22 +0000 (0:00:00.702) 0:03:31.612 ******
2025-09-08 00:55:15.589677 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-08 00:55:15.589683 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-08 00:55:15.589689 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-09-08 00:55:15.589695 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-08 00:55:15.589701 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.589708 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-09-08 00:55:15.589714 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.589720 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-09-08 00:55:15.589726 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.589732 | orchestrator |
2025-09-08 00:55:15.589738 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-09-08 00:55:15.589744 | orchestrator | Monday 08 September 2025 00:47:24 +0000 (0:00:01.788) 0:03:33.400 ******
2025-09-08 00:55:15.589750 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:55:15.589757 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:55:15.589763 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:55:15.589769 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:55:15.589775 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:55:15.589781 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:55:15.589787 | orchestrator |
2025-09-08 00:55:15.589793 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-08 00:55:15.589803 | orchestrator | Monday 08 September 2025 00:47:27 +0000 (0:00:03.003) 0:03:36.403 ******
2025-09-08 00:55:15.589809 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:55:15.589815 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:55:15.589821 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:55:15.589827 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:55:15.589834 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:55:15.589840 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:55:15.589846 | orchestrator |
2025-09-08 00:55:15.589852 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-09-08 00:55:15.589858 | orchestrator | Monday 08 September 2025 00:47:29 +0000 (0:00:01.781) 0:03:38.185 ******
2025-09-08 00:55:15.589864 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.589870 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.589876 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.589883 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:55:15.589889 | orchestrator |
2025-09-08 00:55:15.589895 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-09-08 00:55:15.589901 | orchestrator | Monday 08 September 2025 00:47:30 +0000 (0:00:01.184) 0:03:39.370 ******
2025-09-08 00:55:15.589907 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.589913 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.589919 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.589925 | orchestrator |
2025-09-08 00:55:15.589949 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-09-08 00:55:15.589956 | orchestrator | Monday 08 September 2025 00:47:30 +0000 (0:00:00.355) 0:03:39.726 ******
2025-09-08 00:55:15.589962 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:55:15.589969 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:55:15.589974 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:55:15.589981 | orchestrator |
2025-09-08 00:55:15.589987 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-09-08 00:55:15.589993 | orchestrator | Monday 08 September 2025 00:47:32 +0000 (0:00:01.545) 0:03:41.271 ******
2025-09-08 00:55:15.589999 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-08 00:55:15.590005 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-08 00:55:15.590011 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-08 00:55:15.590045 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.590052 | orchestrator |
2025-09-08 00:55:15.590058 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-09-08 00:55:15.590065 | orchestrator | Monday 08 September 2025 00:47:32 +0000 (0:00:00.760) 0:03:42.032 ******
2025-09-08 00:55:15.590071 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.590077 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.590084 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.590090 | orchestrator |
2025-09-08 00:55:15.590096 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-09-08 00:55:15.590106 | orchestrator | Monday 08 September 2025 00:47:33 +0000 (0:00:00.524) 0:03:42.556 ******
2025-09-08 00:55:15.590112 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.590118 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.590124 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.590130 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:55:15.590136 | orchestrator |
2025-09-08 00:55:15.590142 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-09-08 00:55:15.590149 | orchestrator | Monday 08 September 2025 00:47:35 +0000 (0:00:01.537) 0:03:44.094 ******
2025-09-08 00:55:15.590155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:55:15.590161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:55:15.590171 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:55:15.590177 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.590183 | orchestrator |
2025-09-08 00:55:15.590189 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-09-08 00:55:15.590196 | orchestrator | Monday 08 September 2025 00:47:35 +0000 (0:00:00.380) 0:03:44.475 ******
2025-09-08 00:55:15.590202 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.590208 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.590214 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.590220 | orchestrator |
2025-09-08 00:55:15.590226 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-09-08 00:55:15.590232 | orchestrator | Monday 08 September 2025 00:47:35 +0000 (0:00:00.434) 0:03:44.910 ******
2025-09-08 00:55:15.590238 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.590244 | orchestrator |
2025-09-08 00:55:15.590250 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-09-08 00:55:15.590256 | orchestrator | Monday 08 September 2025 00:47:36 +0000 (0:00:00.188) 0:03:45.098 ******
2025-09-08 00:55:15.590262 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.590268 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.590274 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.590280 | orchestrator |
2025-09-08 00:55:15.590287 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-09-08 00:55:15.590293 | orchestrator | Monday 08 September 2025 00:47:36 +0000 (0:00:00.359) 0:03:45.457 ******
2025-09-08 00:55:15.590299 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.590305 | orchestrator |
2025-09-08 00:55:15.590311 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-09-08 00:55:15.590317 | orchestrator | Monday 08 September 2025 00:47:36 +0000 (0:00:00.208) 0:03:45.665 ******
2025-09-08 00:55:15.590323 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.590329 | orchestrator |
2025-09-08 00:55:15.590335 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-09-08 00:55:15.590341 | orchestrator | Monday 08 September 2025 00:47:36 +0000 (0:00:00.100) 0:03:45.835 ******
2025-09-08 00:55:15.590347 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.590353 | orchestrator |
2025-09-08 00:55:15.590359 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-09-08 00:55:15.590366 | orchestrator | Monday 08 September 2025 00:47:36 +0000 (0:00:00.185) 0:03:45.936 ******
2025-09-08 00:55:15.590372 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.590378 | orchestrator |
2025-09-08 00:55:15.590384 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-09-08 00:55:15.590390 | orchestrator | Monday 08 September 2025 00:47:37 +0000 (0:00:00.185) 0:03:46.121 ******
2025-09-08 00:55:15.590396 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.590402 | orchestrator |
2025-09-08 00:55:15.590408 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-09-08 00:55:15.590414 | orchestrator | Monday 08 September 2025 00:47:37 +0000 (0:00:00.176) 0:03:46.298 ******
2025-09-08 00:55:15.590420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:55:15.590426 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:55:15.590432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:55:15.590438 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.590444 | orchestrator |
2025-09-08 00:55:15.590451 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-09-08 00:55:15.590469 | orchestrator | Monday 08 September 2025 00:47:37 +0000 (0:00:00.514) 0:03:46.812 ******
2025-09-08 00:55:15.590475 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.590499 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.590506 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.590513 | orchestrator |
2025-09-08 00:55:15.590519 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-09-08 00:55:15.590529 | orchestrator | Monday 08 September 2025 00:47:38 +0000 (0:00:00.428) 0:03:47.241 ******
2025-09-08 00:55:15.590535 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.590541 | orchestrator |
2025-09-08 00:55:15.590548 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-09-08 00:55:15.590554 | orchestrator | Monday 08 September 2025 00:47:38 +0000 (0:00:00.200) 0:03:47.442 ******
2025-09-08 00:55:15.590560 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.590566 | orchestrator |
2025-09-08 00:55:15.590572 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-09-08 00:55:15.590578 | orchestrator | Monday 08 September 2025 00:47:38 +0000 (0:00:00.210) 0:03:47.652 ******
2025-09-08 00:55:15.590584 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.590590 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.590596 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.590602 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:55:15.590608 | orchestrator |
2025-09-08 00:55:15.590615 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-09-08 00:55:15.590621 | orchestrator | Monday 08 September 2025 00:47:39 +0000 (0:00:00.856) 0:03:48.508 ******
2025-09-08 00:55:15.590627 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.590636 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.590642 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.590648 | orchestrator |
2025-09-08 00:55:15.590655 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-09-08 00:55:15.590661 | orchestrator | Monday 08 September 2025 00:47:39 +0000 (0:00:00.326) 0:03:48.835 ******
2025-09-08 00:55:15.590667 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:55:15.590673 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:55:15.590679 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:55:15.590685 | orchestrator |
2025-09-08 00:55:15.590691 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-09-08 00:55:15.590698 | orchestrator | Monday 08 September 2025 00:47:40 +0000 (0:00:01.194) 0:03:50.029 ******
2025-09-08 00:55:15.590704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:55:15.590710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:55:15.590716 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:55:15.590722 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.590728 | orchestrator |
2025-09-08 00:55:15.590734 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-09-08 00:55:15.590740 | orchestrator | Monday 08 September 2025 00:47:41 +0000 (0:00:00.899) 0:03:50.929 ******
2025-09-08 00:55:15.590746 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.590753 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.590759 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.590765 | orchestrator |
2025-09-08 00:55:15.590771 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-09-08 00:55:15.590777 | orchestrator | Monday 08 September 2025 00:47:42 +0000 (0:00:00.419) 0:03:51.349 ******
2025-09-08 00:55:15.590783 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.590789 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.590795 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.590802 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:55:15.590808 | orchestrator |
2025-09-08 00:55:15.590814 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-09-08 00:55:15.590820 | orchestrator | Monday 08 September 2025 00:47:43 +0000 (0:00:01.051) 0:03:52.400 ******
2025-09-08 00:55:15.590826 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.590832 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.590838 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.590851 | orchestrator |
2025-09-08 00:55:15.590858 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-09-08 00:55:15.590864 | orchestrator | Monday 08 September 2025 00:47:43 +0000 (0:00:00.355) 0:03:52.756 ******
2025-09-08 00:55:15.590870 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:55:15.590876 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:55:15.590882 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:55:15.590888 | orchestrator |
2025-09-08 00:55:15.590894 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-09-08 00:55:15.590900 | orchestrator | Monday 08 September 2025 00:47:45 +0000 (0:00:01.804) 0:03:54.561 ******
2025-09-08 00:55:15.590907 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:55:15.590913 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:55:15.590919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:55:15.590925 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.590931 | orchestrator |
2025-09-08 00:55:15.590937 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-09-08 00:55:15.590943 | orchestrator | Monday 08 September 2025 00:47:46 +0000 (0:00:00.646) 0:03:55.207 ******
2025-09-08 00:55:15.590949 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.590955 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.590961 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.590968 | orchestrator |
2025-09-08 00:55:15.590974 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-09-08 00:55:15.590980 | orchestrator | Monday 08 September 2025 00:47:46 +0000 (0:00:00.378) 0:03:55.586 ******
2025-09-08 00:55:15.590986 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.590992 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.590998 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.591004 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.591010 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.591016 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.591022 | orchestrator |
2025-09-08 00:55:15.591028 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-09-08 00:55:15.591050 | orchestrator | Monday 08 September 2025 00:47:47 +0000 (0:00:00.607) 0:03:56.193 ******
2025-09-08 00:55:15.591058 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.591064 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.591070 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.591076 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:55:15.591082 | orchestrator |
2025-09-08 00:55:15.591088 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-09-08 00:55:15.591094 | orchestrator | Monday 08 September 2025 00:47:48 +0000 (0:00:01.085) 0:03:57.279 ******
2025-09-08 00:55:15.591100 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.591106 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.591113 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.591119 | orchestrator |
2025-09-08 00:55:15.591125 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-09-08 00:55:15.591131 | orchestrator | Monday 08 September 2025 00:47:48 +0000 (0:00:00.444) 0:03:57.723 ******
2025-09-08 00:55:15.591137 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:55:15.591143 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:55:15.591149 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:55:15.591155 | orchestrator |
2025-09-08 00:55:15.591161 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-09-08 00:55:15.591167 | orchestrator | Monday 08 September 2025 00:47:50 +0000 (0:00:01.407) 0:03:59.130 ******
2025-09-08 00:55:15.591177 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-08 00:55:15.591183 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-08 00:55:15.591189 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-08 00:55:15.591200 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.591206 | orchestrator |
2025-09-08 00:55:15.591212 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-09-08 00:55:15.591218 | orchestrator | Monday 08 September 2025 00:47:50 +0000 (0:00:00.640) 0:03:59.771 ******
2025-09-08 00:55:15.591224 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.591231 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.591237 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.591243 | orchestrator |
2025-09-08 00:55:15.591249 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-09-08 00:55:15.591255 | orchestrator |
2025-09-08 00:55:15.591261 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-08 00:55:15.591267 | orchestrator | Monday 08 September 2025 00:47:51 +0000 (0:00:00.584) 0:04:00.355 ******
2025-09-08 00:55:15.591274 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:55:15.591280 | orchestrator |
2025-09-08 00:55:15.591286 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-08 00:55:15.591292 | orchestrator | Monday 08 September 2025 00:47:52 +0000 (0:00:00.747) 0:04:01.103 ******
2025-09-08 00:55:15.591298 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:55:15.591304 | orchestrator |
2025-09-08 00:55:15.591310 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-08 00:55:15.591316 | orchestrator | Monday 08 September 2025 00:47:52 +0000 (0:00:00.581) 0:04:01.685 ******
2025-09-08 00:55:15.591322 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.591328 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.591334 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.591340 | orchestrator |
2025-09-08 00:55:15.591347 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-08 00:55:15.591353 | orchestrator | Monday 08 September 2025 00:47:53 +0000 (0:00:00.752) 0:04:02.437 ******
2025-09-08 00:55:15.591359 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.591365 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.591371 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.591377 | orchestrator |
2025-09-08 00:55:15.591383 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-08 00:55:15.591389 | orchestrator | Monday 08 September 2025 00:47:53 +0000 (0:00:00.547) 0:04:02.985 ******
2025-09-08 00:55:15.591395 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.591402 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.591408 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.591414 | orchestrator |
2025-09-08 00:55:15.591420 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-08 00:55:15.591426 | orchestrator | Monday 08 September 2025 00:47:54 +0000 (0:00:00.327) 0:04:03.312 ******
2025-09-08 00:55:15.591432 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.591438 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.591444 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.591450 | orchestrator |
2025-09-08 00:55:15.591485 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-08 00:55:15.591492 | orchestrator | Monday 08 September 2025 00:47:54 +0000 (0:00:00.366) 0:04:03.678 ******
2025-09-08 00:55:15.591498 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.591504 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.591510 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.591517 | orchestrator |
2025-09-08 00:55:15.591523 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-08 00:55:15.591529 | orchestrator | Monday 08 September 2025 00:47:55 +0000 (0:00:00.771) 0:04:04.450 ******
2025-09-08 00:55:15.591535 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.591541 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.591552 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.591558 | orchestrator |
2025-09-08 00:55:15.591564 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-08 00:55:15.591570 | orchestrator | Monday 08 September 2025 00:47:55 +0000 (0:00:00.320) 0:04:04.771 ******
2025-09-08 00:55:15.591577 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.591583 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.591589 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.591595 | orchestrator |
2025-09-08 00:55:15.591618 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-08 00:55:15.591625 | orchestrator | Monday 08 September 2025 00:47:56 +0000 (0:00:00.590) 0:04:05.361 ******
2025-09-08 00:55:15.591631 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.591636 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.591642 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.591647 | orchestrator |
2025-09-08 00:55:15.591652 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-08 00:55:15.591657 | orchestrator | Monday 08 September 2025 00:47:57 +0000 (0:00:00.768) 0:04:06.130 ******
2025-09-08 00:55:15.591663 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.591668 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.591673 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.591679 | orchestrator |
2025-09-08 00:55:15.591684 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-08 00:55:15.591689 | orchestrator | Monday 08 September 2025 00:47:57 +0000 (0:00:00.763) 0:04:06.894 ******
2025-09-08 00:55:15.591695 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.591700 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.591705 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.591710 | orchestrator |
2025-09-08 00:55:15.591716 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-08 00:55:15.591721 | orchestrator | Monday 08 September 2025 00:47:58 +0000 (0:00:00.344) 0:04:07.238 ******
2025-09-08 00:55:15.591726 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.591732 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.591737 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.591742 | orchestrator |
2025-09-08 00:55:15.591751 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-08 00:55:15.591756 | orchestrator | Monday 08 September 2025 00:47:58 +0000 (0:00:00.693) 0:04:07.931 ******
2025-09-08 00:55:15.591762 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.591767 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.591772 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.591777 | orchestrator |
2025-09-08 00:55:15.591783 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-08 00:55:15.591788 | orchestrator | Monday 08 September 2025 00:47:59 +0000 (0:00:00.337) 0:04:08.269 ******
2025-09-08 00:55:15.591793 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.591798 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.591804 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.591809 | orchestrator |
2025-09-08 00:55:15.591814 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-08 00:55:15.591820 | orchestrator | Monday 08 September 2025 00:47:59 +0000 (0:00:00.313) 0:04:08.582 ******
2025-09-08 00:55:15.591825 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.591830 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.591836 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.591841 | orchestrator |
2025-09-08 00:55:15.591846 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-08 00:55:15.591851 | orchestrator | Monday 08 September 2025 00:47:59 +0000 (0:00:00.336) 0:04:08.918 ******
2025-09-08 00:55:15.591857 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.591862 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.591867 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.591877 | orchestrator |
2025-09-08 00:55:15.591882 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-08 00:55:15.591887 | orchestrator | Monday 08 September 2025 00:48:00 +0000 (0:00:00.569) 0:04:09.487 ******
2025-09-08 00:55:15.591893 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.591898 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.591903 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.591909 | orchestrator |
2025-09-08 00:55:15.591914 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-08 00:55:15.591919 | orchestrator | Monday 08 September 2025 00:48:00 +0000 (0:00:00.356) 0:04:09.844 ******
2025-09-08 00:55:15.591925 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.591930 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.591935 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.591941 | orchestrator |
2025-09-08 00:55:15.591946 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-08 00:55:15.591951 | orchestrator | Monday 08 September 2025 00:48:01 +0000 (0:00:00.558) 0:04:10.402 ******
2025-09-08 00:55:15.591957 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.591962 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.591967 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.591972 | orchestrator |
2025-09-08 00:55:15.591978 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-08 00:55:15.591983 | orchestrator | Monday 08 September 2025 00:48:01 +0000 (0:00:00.540)
0:04:10.943 ****** 2025-09-08 00:55:15.591988 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.591993 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.591999 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.592004 | orchestrator | 2025-09-08 00:55:15.592009 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-09-08 00:55:15.592015 | orchestrator | Monday 08 September 2025 00:48:02 +0000 (0:00:00.926) 0:04:11.869 ****** 2025-09-08 00:55:15.592020 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.592025 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.592031 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.592036 | orchestrator | 2025-09-08 00:55:15.592041 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-09-08 00:55:15.592047 | orchestrator | Monday 08 September 2025 00:48:03 +0000 (0:00:00.328) 0:04:12.198 ****** 2025-09-08 00:55:15.592052 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:55:15.592057 | orchestrator | 2025-09-08 00:55:15.592063 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-09-08 00:55:15.592068 | orchestrator | Monday 08 September 2025 00:48:03 +0000 (0:00:00.531) 0:04:12.730 ****** 2025-09-08 00:55:15.592073 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.592079 | orchestrator | 2025-09-08 00:55:15.592084 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-09-08 00:55:15.592103 | orchestrator | Monday 08 September 2025 00:48:04 +0000 (0:00:00.375) 0:04:13.105 ****** 2025-09-08 00:55:15.592109 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-09-08 00:55:15.592114 | orchestrator | 2025-09-08 00:55:15.592120 | orchestrator | TASK [ceph-mon : Set_fact 
_initial_mon_key_success] **************************** 2025-09-08 00:55:15.592125 | orchestrator | Monday 08 September 2025 00:48:05 +0000 (0:00:01.056) 0:04:14.161 ****** 2025-09-08 00:55:15.592130 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.592136 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.592141 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.592147 | orchestrator | 2025-09-08 00:55:15.592152 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-09-08 00:55:15.592157 | orchestrator | Monday 08 September 2025 00:48:05 +0000 (0:00:00.346) 0:04:14.508 ****** 2025-09-08 00:55:15.592163 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.592168 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.592173 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.592183 | orchestrator | 2025-09-08 00:55:15.592188 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-09-08 00:55:15.592193 | orchestrator | Monday 08 September 2025 00:48:05 +0000 (0:00:00.366) 0:04:14.874 ****** 2025-09-08 00:55:15.592199 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:55:15.592204 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:55:15.592209 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:55:15.592215 | orchestrator | 2025-09-08 00:55:15.592220 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-09-08 00:55:15.592226 | orchestrator | Monday 08 September 2025 00:48:07 +0000 (0:00:01.353) 0:04:16.228 ****** 2025-09-08 00:55:15.592231 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:55:15.592239 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:55:15.592245 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:55:15.592250 | orchestrator | 2025-09-08 00:55:15.592256 | orchestrator | TASK [ceph-mon : Create monitor directory] 
************************************* 2025-09-08 00:55:15.592261 | orchestrator | Monday 08 September 2025 00:48:08 +0000 (0:00:01.064) 0:04:17.292 ****** 2025-09-08 00:55:15.592266 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:55:15.592272 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:55:15.592277 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:55:15.592282 | orchestrator | 2025-09-08 00:55:15.592287 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-09-08 00:55:15.592293 | orchestrator | Monday 08 September 2025 00:48:08 +0000 (0:00:00.793) 0:04:18.085 ****** 2025-09-08 00:55:15.592298 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.592303 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.592309 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.592314 | orchestrator | 2025-09-08 00:55:15.592319 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-09-08 00:55:15.592325 | orchestrator | Monday 08 September 2025 00:48:09 +0000 (0:00:00.744) 0:04:18.830 ****** 2025-09-08 00:55:15.592330 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:55:15.592335 | orchestrator | 2025-09-08 00:55:15.592341 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-09-08 00:55:15.592346 | orchestrator | Monday 08 September 2025 00:48:11 +0000 (0:00:01.288) 0:04:20.118 ****** 2025-09-08 00:55:15.592351 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.592357 | orchestrator | 2025-09-08 00:55:15.592362 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-09-08 00:55:15.592367 | orchestrator | Monday 08 September 2025 00:48:11 +0000 (0:00:00.688) 0:04:20.807 ****** 2025-09-08 00:55:15.592373 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-08 00:55:15.592378 | orchestrator | ok: [testbed-node-1 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:55:15.592383 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:55:15.592389 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-08 00:55:15.592394 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-09-08 00:55:15.592400 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-08 00:55:15.592405 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-08 00:55:15.592410 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-09-08 00:55:15.592416 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-09-08 00:55:15.592421 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-08 00:55:15.592426 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-09-08 00:55:15.592432 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-09-08 00:55:15.592437 | orchestrator | 2025-09-08 00:55:15.592442 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-09-08 00:55:15.592448 | orchestrator | Monday 08 September 2025 00:48:15 +0000 (0:00:03.476) 0:04:24.284 ****** 2025-09-08 00:55:15.592453 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:55:15.592474 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:55:15.592479 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:55:15.592485 | orchestrator | 2025-09-08 00:55:15.592490 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-09-08 00:55:15.592495 | orchestrator | Monday 08 September 2025 00:48:16 +0000 (0:00:01.617) 0:04:25.902 ****** 2025-09-08 00:55:15.592501 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.592506 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.592512 | orchestrator | ok: [testbed-node-2] 
2025-09-08 00:55:15.592517 | orchestrator | 2025-09-08 00:55:15.592522 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-09-08 00:55:15.592528 | orchestrator | Monday 08 September 2025 00:48:17 +0000 (0:00:00.364) 0:04:26.266 ****** 2025-09-08 00:55:15.592533 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.592539 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.592544 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.592549 | orchestrator | 2025-09-08 00:55:15.592555 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-09-08 00:55:15.592560 | orchestrator | Monday 08 September 2025 00:48:17 +0000 (0:00:00.349) 0:04:26.616 ****** 2025-09-08 00:55:15.592566 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:55:15.592571 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:55:15.592576 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:55:15.592582 | orchestrator | 2025-09-08 00:55:15.592601 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-09-08 00:55:15.592608 | orchestrator | Monday 08 September 2025 00:48:19 +0000 (0:00:01.912) 0:04:28.528 ****** 2025-09-08 00:55:15.592613 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:55:15.592619 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:55:15.592624 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:55:15.592629 | orchestrator | 2025-09-08 00:55:15.592634 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-09-08 00:55:15.592640 | orchestrator | Monday 08 September 2025 00:48:21 +0000 (0:00:02.161) 0:04:30.690 ****** 2025-09-08 00:55:15.592645 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.592650 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.592656 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.592661 
| orchestrator | 2025-09-08 00:55:15.592666 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-09-08 00:55:15.592672 | orchestrator | Monday 08 September 2025 00:48:21 +0000 (0:00:00.331) 0:04:31.021 ****** 2025-09-08 00:55:15.592677 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:55:15.592682 | orchestrator | 2025-09-08 00:55:15.592688 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-09-08 00:55:15.592693 | orchestrator | Monday 08 September 2025 00:48:22 +0000 (0:00:00.576) 0:04:31.598 ****** 2025-09-08 00:55:15.592698 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.592704 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.592714 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.592720 | orchestrator | 2025-09-08 00:55:15.592725 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-09-08 00:55:15.592730 | orchestrator | Monday 08 September 2025 00:48:23 +0000 (0:00:00.686) 0:04:32.284 ****** 2025-09-08 00:55:15.592736 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.592741 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.592746 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.592752 | orchestrator | 2025-09-08 00:55:15.592757 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-09-08 00:55:15.592762 | orchestrator | Monday 08 September 2025 00:48:23 +0000 (0:00:00.400) 0:04:32.685 ****** 2025-09-08 00:55:15.592768 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:55:15.592773 | orchestrator | 2025-09-08 00:55:15.592778 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] 
***************** 2025-09-08 00:55:15.592788 | orchestrator | Monday 08 September 2025 00:48:24 +0000 (0:00:00.567) 0:04:33.252 ****** 2025-09-08 00:55:15.592793 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:55:15.592799 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:55:15.592804 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:55:15.592809 | orchestrator | 2025-09-08 00:55:15.592815 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-09-08 00:55:15.592820 | orchestrator | Monday 08 September 2025 00:48:26 +0000 (0:00:02.572) 0:04:35.824 ****** 2025-09-08 00:55:15.592825 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:55:15.592831 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:55:15.592836 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:55:15.592841 | orchestrator | 2025-09-08 00:55:15.592846 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-09-08 00:55:15.592852 | orchestrator | Monday 08 September 2025 00:48:27 +0000 (0:00:01.209) 0:04:37.034 ****** 2025-09-08 00:55:15.592857 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:55:15.592862 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:55:15.592868 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:55:15.592873 | orchestrator | 2025-09-08 00:55:15.592878 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-09-08 00:55:15.592884 | orchestrator | Monday 08 September 2025 00:48:29 +0000 (0:00:01.975) 0:04:39.009 ****** 2025-09-08 00:55:15.592889 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:55:15.592894 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:55:15.592900 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:55:15.592905 | orchestrator | 2025-09-08 00:55:15.592910 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] 
********************************** 2025-09-08 00:55:15.592916 | orchestrator | Monday 08 September 2025 00:48:32 +0000 (0:00:02.096) 0:04:41.106 ****** 2025-09-08 00:55:15.592921 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:55:15.592926 | orchestrator | 2025-09-08 00:55:15.592931 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-09-08 00:55:15.592937 | orchestrator | Monday 08 September 2025 00:48:32 +0000 (0:00:00.794) 0:04:41.900 ****** 2025-09-08 00:55:15.592942 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-09-08 00:55:15.592947 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.592953 | orchestrator | 2025-09-08 00:55:15.592958 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-09-08 00:55:15.592963 | orchestrator | Monday 08 September 2025 00:48:54 +0000 (0:00:21.875) 0:05:03.775 ****** 2025-09-08 00:55:15.592969 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.592974 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.592979 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.592985 | orchestrator | 2025-09-08 00:55:15.592990 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-09-08 00:55:15.592995 | orchestrator | Monday 08 September 2025 00:49:03 +0000 (0:00:09.118) 0:05:12.894 ****** 2025-09-08 00:55:15.593000 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.593006 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.593011 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.593016 | orchestrator | 2025-09-08 00:55:15.593022 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-09-08 00:55:15.593027 | orchestrator | 
Monday 08 September 2025 00:49:04 +0000 (0:00:00.311) 0:05:13.206 ****** 2025-09-08 00:55:15.593047 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2f45f38eeb201c2a7dc63865508d9622ebd0f14b'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-09-08 00:55:15.593058 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2f45f38eeb201c2a7dc63865508d9622ebd0f14b'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-09-08 00:55:15.593068 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2f45f38eeb201c2a7dc63865508d9622ebd0f14b'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-09-08 00:55:15.593075 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2f45f38eeb201c2a7dc63865508d9622ebd0f14b'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-09-08 00:55:15.593080 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 
'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2f45f38eeb201c2a7dc63865508d9622ebd0f14b'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-09-08 00:55:15.593086 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2f45f38eeb201c2a7dc63865508d9622ebd0f14b'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__2f45f38eeb201c2a7dc63865508d9622ebd0f14b'}])  2025-09-08 00:55:15.593093 | orchestrator | 2025-09-08 00:55:15.593098 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-08 00:55:15.593104 | orchestrator | Monday 08 September 2025 00:49:19 +0000 (0:00:14.999) 0:05:28.205 ****** 2025-09-08 00:55:15.593109 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.593114 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.593119 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.593125 | orchestrator | 2025-09-08 00:55:15.593130 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-08 00:55:15.593135 | orchestrator | Monday 08 September 2025 00:49:19 +0000 (0:00:00.363) 0:05:28.569 ****** 2025-09-08 00:55:15.593141 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:55:15.593146 | orchestrator | 2025-09-08 00:55:15.593151 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-08 00:55:15.593157 | orchestrator | Monday 08 September 2025 00:49:20 +0000 (0:00:00.768) 0:05:29.337 ****** 2025-09-08 00:55:15.593162 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.593167 | orchestrator | ok: [testbed-node-1] 2025-09-08 
00:55:15.593173 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.593178 | orchestrator | 2025-09-08 00:55:15.593183 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-08 00:55:15.593189 | orchestrator | Monday 08 September 2025 00:49:20 +0000 (0:00:00.323) 0:05:29.660 ****** 2025-09-08 00:55:15.593194 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.593199 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.593204 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.593210 | orchestrator | 2025-09-08 00:55:15.593215 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-08 00:55:15.593220 | orchestrator | Monday 08 September 2025 00:49:20 +0000 (0:00:00.367) 0:05:30.027 ****** 2025-09-08 00:55:15.593229 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-08 00:55:15.593235 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-08 00:55:15.593240 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-08 00:55:15.593245 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.593250 | orchestrator | 2025-09-08 00:55:15.593256 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-08 00:55:15.593261 | orchestrator | Monday 08 September 2025 00:49:21 +0000 (0:00:00.687) 0:05:30.715 ****** 2025-09-08 00:55:15.593266 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.593272 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.593277 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.593282 | orchestrator | 2025-09-08 00:55:15.593301 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-09-08 00:55:15.593307 | orchestrator | 2025-09-08 00:55:15.593312 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2025-09-08 00:55:15.593318 | orchestrator | Monday 08 September 2025 00:49:22 +0000 (0:00:00.831) 0:05:31.547 ****** 2025-09-08 00:55:15.593323 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:55:15.593329 | orchestrator | 2025-09-08 00:55:15.593334 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-08 00:55:15.593339 | orchestrator | Monday 08 September 2025 00:49:23 +0000 (0:00:00.553) 0:05:32.101 ****** 2025-09-08 00:55:15.593345 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:55:15.593350 | orchestrator | 2025-09-08 00:55:15.593355 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-08 00:55:15.593361 | orchestrator | Monday 08 September 2025 00:49:23 +0000 (0:00:00.523) 0:05:32.624 ****** 2025-09-08 00:55:15.593366 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.593371 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.593377 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.593382 | orchestrator | 2025-09-08 00:55:15.593387 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-08 00:55:15.593393 | orchestrator | Monday 08 September 2025 00:49:24 +0000 (0:00:01.006) 0:05:33.630 ****** 2025-09-08 00:55:15.593401 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.593407 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.593412 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.593417 | orchestrator | 2025-09-08 00:55:15.593423 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-08 00:55:15.593428 | orchestrator | Monday 08 September 2025 00:49:24 +0000 
(0:00:00.319) 0:05:33.950 ****** 2025-09-08 00:55:15.593434 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.593439 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.593444 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.593449 | orchestrator | 2025-09-08 00:55:15.593479 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-08 00:55:15.593485 | orchestrator | Monday 08 September 2025 00:49:25 +0000 (0:00:00.334) 0:05:34.284 ****** 2025-09-08 00:55:15.593491 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.593496 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.593502 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.593507 | orchestrator | 2025-09-08 00:55:15.593513 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-08 00:55:15.593518 | orchestrator | Monday 08 September 2025 00:49:25 +0000 (0:00:00.331) 0:05:34.616 ****** 2025-09-08 00:55:15.593523 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.593529 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.593534 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.593540 | orchestrator | 2025-09-08 00:55:15.593545 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-08 00:55:15.593555 | orchestrator | Monday 08 September 2025 00:49:26 +0000 (0:00:01.039) 0:05:35.656 ****** 2025-09-08 00:55:15.593561 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.593566 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.593572 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.593577 | orchestrator | 2025-09-08 00:55:15.593582 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-08 00:55:15.593588 | orchestrator | Monday 08 September 2025 00:49:26 +0000 (0:00:00.349) 
0:05:36.005 ******
2025-09-08 00:55:15.593593 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.593599 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.593604 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.593609 | orchestrator |
2025-09-08 00:55:15.593615 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-08 00:55:15.593620 | orchestrator | Monday 08 September 2025 00:49:27 +0000 (0:00:00.319) 0:05:36.325 ******
2025-09-08 00:55:15.593626 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.593631 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.593636 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.593642 | orchestrator |
2025-09-08 00:55:15.593647 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-08 00:55:15.593653 | orchestrator | Monday 08 September 2025 00:49:28 +0000 (0:00:00.779) 0:05:37.104 ******
2025-09-08 00:55:15.593658 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.593663 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.593669 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.593674 | orchestrator |
2025-09-08 00:55:15.593679 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-08 00:55:15.593685 | orchestrator | Monday 08 September 2025 00:49:29 +0000 (0:00:01.173) 0:05:38.278 ******
2025-09-08 00:55:15.593690 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.593696 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.593701 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.593707 | orchestrator |
2025-09-08 00:55:15.593712 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-08 00:55:15.593717 | orchestrator | Monday 08 September 2025 00:49:29 +0000 (0:00:00.350) 0:05:38.628 ******
2025-09-08 00:55:15.593723 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.593728 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.593734 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.593739 | orchestrator |
2025-09-08 00:55:15.593744 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-08 00:55:15.593750 | orchestrator | Monday 08 September 2025 00:49:29 +0000 (0:00:00.331) 0:05:38.960 ******
2025-09-08 00:55:15.593755 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.593761 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.593766 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.593771 | orchestrator |
2025-09-08 00:55:15.593777 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-08 00:55:15.593782 | orchestrator | Monday 08 September 2025 00:49:30 +0000 (0:00:00.317) 0:05:39.277 ******
2025-09-08 00:55:15.593788 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.593793 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.593815 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.593821 | orchestrator |
2025-09-08 00:55:15.593827 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-08 00:55:15.593831 | orchestrator | Monday 08 September 2025 00:49:30 +0000 (0:00:00.606) 0:05:39.884 ******
2025-09-08 00:55:15.593836 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.593841 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.593845 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.593850 | orchestrator |
2025-09-08 00:55:15.593855 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-08 00:55:15.593860 | orchestrator | Monday 08 September 2025 00:49:31 +0000 (0:00:00.346) 0:05:40.230 ******
2025-09-08 00:55:15.593868 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.593873 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.593877 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.593882 | orchestrator |
2025-09-08 00:55:15.593887 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-08 00:55:15.593892 | orchestrator | Monday 08 September 2025 00:49:31 +0000 (0:00:00.322) 0:05:40.552 ******
2025-09-08 00:55:15.593896 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.593901 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.593906 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.593910 | orchestrator |
2025-09-08 00:55:15.593915 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-08 00:55:15.593920 | orchestrator | Monday 08 September 2025 00:49:31 +0000 (0:00:00.417) 0:05:40.970 ******
2025-09-08 00:55:15.593925 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.593932 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.593937 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.593942 | orchestrator |
2025-09-08 00:55:15.593946 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-08 00:55:15.593951 | orchestrator | Monday 08 September 2025 00:49:32 +0000 (0:00:00.449) 0:05:41.419 ******
2025-09-08 00:55:15.593956 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.593961 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.593965 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.593970 | orchestrator |
2025-09-08 00:55:15.593975 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-08 00:55:15.593980 | orchestrator | Monday 08 September 2025 00:49:32 +0000 (0:00:00.627) 0:05:42.047 ******
2025-09-08 00:55:15.593984 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.593989 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.593994 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.593998 | orchestrator |
2025-09-08 00:55:15.594003 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-09-08 00:55:15.594008 | orchestrator | Monday 08 September 2025 00:49:33 +0000 (0:00:00.577) 0:05:42.624 ******
2025-09-08 00:55:15.594027 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-08 00:55:15.594032 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-08 00:55:15.594038 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-08 00:55:15.594043 | orchestrator |
2025-09-08 00:55:15.594047 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2025-09-08 00:55:15.594052 | orchestrator | Monday 08 September 2025 00:49:34 +0000 (0:00:00.859) 0:05:43.484 ******
2025-09-08 00:55:15.594057 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:55:15.594062 | orchestrator |
2025-09-08 00:55:15.594066 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2025-09-08 00:55:15.594071 | orchestrator | Monday 08 September 2025 00:49:35 +0000 (0:00:00.758) 0:05:44.243 ******
2025-09-08 00:55:15.594076 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:55:15.594081 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:55:15.594085 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:55:15.594090 | orchestrator |
2025-09-08 00:55:15.594095 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2025-09-08 00:55:15.594099 | orchestrator | Monday 08 September 2025 00:49:35 +0000 (0:00:00.732) 0:05:44.975 ******
2025-09-08 00:55:15.594104 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.594109 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.594114 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.594118 | orchestrator |
2025-09-08 00:55:15.594123 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2025-09-08 00:55:15.594128 | orchestrator | Monday 08 September 2025 00:49:36 +0000 (0:00:00.321) 0:05:45.297 ******
2025-09-08 00:55:15.594136 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-08 00:55:15.594141 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-08 00:55:15.594145 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-08 00:55:15.594150 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-09-08 00:55:15.594155 | orchestrator |
2025-09-08 00:55:15.594160 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-09-08 00:55:15.594164 | orchestrator | Monday 08 September 2025 00:49:47 +0000 (0:00:10.895) 0:05:56.193 ******
2025-09-08 00:55:15.594169 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.594174 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.594178 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.594183 | orchestrator |
2025-09-08 00:55:15.594188 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-09-08 00:55:15.594193 | orchestrator | Monday 08 September 2025 00:49:47 +0000 (0:00:00.579) 0:05:56.772 ******
2025-09-08 00:55:15.594197 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-08 00:55:15.594202 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-08 00:55:15.594207 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-08 00:55:15.594211 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-09-08 00:55:15.594216 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-08 00:55:15.594221 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-08 00:55:15.594226 | orchestrator |
2025-09-08 00:55:15.594245 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-09-08 00:55:15.594250 | orchestrator | Monday 08 September 2025 00:49:49 +0000 (0:00:02.085) 0:05:58.857 ******
2025-09-08 00:55:15.594255 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-08 00:55:15.594260 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-08 00:55:15.594265 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-08 00:55:15.594269 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-08 00:55:15.594274 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-08 00:55:15.594279 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-08 00:55:15.594284 | orchestrator |
2025-09-08 00:55:15.594288 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-09-08 00:55:15.594293 | orchestrator | Monday 08 September 2025 00:49:50 +0000 (0:00:01.156) 0:06:00.014 ******
2025-09-08 00:55:15.594298 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.594303 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.594308 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.594312 | orchestrator |
2025-09-08 00:55:15.594317 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-09-08 00:55:15.594322 | orchestrator | Monday 08 September 2025 00:49:51 +0000 (0:00:00.679) 0:06:00.693 ******
2025-09-08 00:55:15.594327 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.594332 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.594336 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.594341 | orchestrator |
2025-09-08 00:55:15.594346 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-09-08 00:55:15.594353 | orchestrator | Monday 08 September 2025 00:49:52 +0000 (0:00:00.545) 0:06:01.238 ******
2025-09-08 00:55:15.594358 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.594363 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.594368 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.594372 | orchestrator |
2025-09-08 00:55:15.594377 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-09-08 00:55:15.594382 | orchestrator | Monday 08 September 2025 00:49:52 +0000 (0:00:00.304) 0:06:01.543 ******
2025-09-08 00:55:15.594387 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:55:15.594392 | orchestrator |
2025-09-08 00:55:15.594400 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2025-09-08 00:55:15.594404 | orchestrator | Monday 08 September 2025 00:49:52 +0000 (0:00:00.544) 0:06:02.087 ******
2025-09-08 00:55:15.594409 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.594414 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.594419 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.594424 | orchestrator |
2025-09-08 00:55:15.594428 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2025-09-08 00:55:15.594433 | orchestrator | Monday 08 September 2025 00:49:53 +0000 (0:00:00.343) 0:06:02.431 ******
2025-09-08 00:55:15.594438 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.594443 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.594447 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.594452 | orchestrator |
2025-09-08 00:55:15.594467 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2025-09-08 00:55:15.594472 | orchestrator | Monday 08 September 2025 00:49:53 +0000 (0:00:00.611) 0:06:03.042 ******
2025-09-08 00:55:15.594477 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:55:15.594482 | orchestrator |
2025-09-08 00:55:15.594487 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2025-09-08 00:55:15.594491 | orchestrator | Monday 08 September 2025 00:49:54 +0000 (0:00:00.522) 0:06:03.565 ******
2025-09-08 00:55:15.594496 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:55:15.594501 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:55:15.594506 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:55:15.594510 | orchestrator |
2025-09-08 00:55:15.594515 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2025-09-08 00:55:15.594520 | orchestrator | Monday 08 September 2025 00:49:55 +0000 (0:00:01.170) 0:06:04.735 ******
2025-09-08 00:55:15.594525 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:55:15.594529 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:55:15.594534 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:55:15.594539 | orchestrator |
2025-09-08 00:55:15.594543 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2025-09-08 00:55:15.594548 | orchestrator | Monday 08 September 2025 00:49:57 +0000 (0:00:01.474) 0:06:06.209 ******
2025-09-08 00:55:15.594553 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:55:15.594558 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:55:15.594562 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:55:15.594567 | orchestrator |
2025-09-08 00:55:15.594572 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2025-09-08 00:55:15.594577 | orchestrator | Monday 08 September 2025 00:49:58 +0000 (0:00:01.744) 0:06:07.954 ******
2025-09-08 00:55:15.594581 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:55:15.594586 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:55:15.594591 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:55:15.594596 | orchestrator |
2025-09-08 00:55:15.594600 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2025-09-08 00:55:15.594605 | orchestrator | Monday 08 September 2025 00:50:00 +0000 (0:00:01.850) 0:06:09.804 ******
2025-09-08 00:55:15.594610 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.594615 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.594619 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2025-09-08 00:55:15.594624 | orchestrator |
2025-09-08 00:55:15.594629 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2025-09-08 00:55:15.594634 | orchestrator | Monday 08 September 2025 00:50:01 +0000 (0:00:00.440) 0:06:10.244 ******
2025-09-08 00:55:15.594638 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2025-09-08 00:55:15.594656 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2025-09-08 00:55:15.594666 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2025-09-08 00:55:15.594671 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2025-09-08 00:55:15.594676 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-09-08 00:55:15.594680 | orchestrator |
2025-09-08 00:55:15.594685 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2025-09-08 00:55:15.594690 | orchestrator | Monday 08 September 2025 00:50:25 +0000 (0:00:24.646) 0:06:34.891 ******
2025-09-08 00:55:15.594695 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-09-08 00:55:15.594699 | orchestrator |
2025-09-08 00:55:15.594704 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-09-08 00:55:15.594709 | orchestrator | Monday 08 September 2025 00:50:27 +0000 (0:00:01.268) 0:06:36.160 ******
2025-09-08 00:55:15.594713 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.594718 | orchestrator |
2025-09-08 00:55:15.594723 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2025-09-08 00:55:15.594728 | orchestrator | Monday 08 September 2025 00:50:27 +0000 (0:00:00.307) 0:06:36.467 ******
2025-09-08 00:55:15.594732 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.594737 | orchestrator |
2025-09-08 00:55:15.594742 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2025-09-08 00:55:15.594747 | orchestrator | Monday 08 September 2025 00:50:27 +0000 (0:00:00.145) 0:06:36.613 ******
2025-09-08 00:55:15.594751 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-09-08 00:55:15.594756 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-09-08 00:55:15.594761 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-09-08 00:55:15.594766 | orchestrator |
2025-09-08 00:55:15.594771 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2025-09-08 00:55:15.594775 | orchestrator | Monday 08 September 2025 00:50:33 +0000 (0:00:06.334) 0:06:42.947 ******
2025-09-08 00:55:15.594780 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-09-08 00:55:15.594785 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-09-08 00:55:15.594790 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-09-08 00:55:15.594795 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-09-08 00:55:15.594799 | orchestrator |
2025-09-08 00:55:15.594804 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-08 00:55:15.594809 | orchestrator | Monday 08 September 2025 00:50:38 +0000 (0:00:04.656) 0:06:47.604 ******
2025-09-08 00:55:15.594814 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:55:15.594818 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:55:15.594823 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:55:15.594828 | orchestrator |
2025-09-08 00:55:15.594833 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-09-08 00:55:15.594837 | orchestrator | Monday 08 September 2025 00:50:39 +0000 (0:00:00.880) 0:06:48.484 ******
2025-09-08 00:55:15.594842 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:55:15.594847 | orchestrator |
2025-09-08 00:55:15.594852 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-09-08 00:55:15.594856 | orchestrator | Monday 08 September 2025 00:50:39 +0000 (0:00:00.538) 0:06:49.023 ******
2025-09-08 00:55:15.594861 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.594866 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.594871 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.594875 | orchestrator |
2025-09-08 00:55:15.594880 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-09-08 00:55:15.594885 | orchestrator | Monday 08 September 2025 00:50:40 +0000 (0:00:00.356) 0:06:49.380 ******
2025-09-08 00:55:15.594893 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:55:15.594898 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:55:15.594903 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:55:15.594907 | orchestrator |
2025-09-08 00:55:15.594912 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-09-08 00:55:15.594917 | orchestrator | Monday 08 September 2025 00:50:41 +0000 (0:00:01.447) 0:06:50.828 ******
2025-09-08 00:55:15.594922 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-08 00:55:15.594926 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-08 00:55:15.594931 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-08 00:55:15.594936 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.594941 | orchestrator |
2025-09-08 00:55:15.594945 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-09-08 00:55:15.594950 | orchestrator | Monday 08 September 2025 00:50:42 +0000 (0:00:00.605) 0:06:51.433 ******
2025-09-08 00:55:15.594955 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.594960 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.594964 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.594969 | orchestrator |
2025-09-08 00:55:15.594974 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-09-08 00:55:15.594979 | orchestrator |
2025-09-08 00:55:15.594983 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-08 00:55:15.594988 | orchestrator | Monday 08 September 2025 00:50:42 +0000 (0:00:00.530) 0:06:51.964 ******
2025-09-08 00:55:15.595027 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:55:15.595038 | orchestrator |
2025-09-08 00:55:15.595043 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-08 00:55:15.595064 | orchestrator | Monday 08 September 2025 00:50:43 +0000 (0:00:00.687) 0:06:52.651 ******
2025-09-08 00:55:15.595069 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:55:15.595074 | orchestrator |
2025-09-08 00:55:15.595079 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-08 00:55:15.595084 | orchestrator | Monday 08 September 2025 00:50:44 +0000 (0:00:00.513) 0:06:53.165 ******
2025-09-08 00:55:15.595088 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.595093 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.595098 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.595103 | orchestrator |
2025-09-08 00:55:15.595108 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-08 00:55:15.595112 | orchestrator | Monday 08 September 2025 00:50:44 +0000 (0:00:00.291) 0:06:53.457 ******
2025-09-08 00:55:15.595117 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.595122 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.595127 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.595131 | orchestrator |
2025-09-08 00:55:15.595136 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-08 00:55:15.595141 | orchestrator | Monday 08 September 2025 00:50:45 +0000 (0:00:00.919) 0:06:54.377 ******
2025-09-08 00:55:15.595145 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.595150 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.595155 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.595159 | orchestrator |
2025-09-08 00:55:15.595164 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-08 00:55:15.595172 | orchestrator | Monday 08 September 2025 00:50:46 +0000 (0:00:00.752) 0:06:55.129 ******
2025-09-08 00:55:15.595177 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.595181 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.595186 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.595191 | orchestrator |
2025-09-08 00:55:15.595195 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-08 00:55:15.595204 | orchestrator | Monday 08 September 2025 00:50:46 +0000 (0:00:00.783) 0:06:55.913 ******
2025-09-08 00:55:15.595209 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.595214 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.595218 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.595223 | orchestrator |
2025-09-08 00:55:15.595228 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-08 00:55:15.595232 | orchestrator | Monday 08 September 2025 00:50:47 +0000 (0:00:00.299) 0:06:56.213 ******
2025-09-08 00:55:15.595237 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.595242 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.595246 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.595251 | orchestrator |
2025-09-08 00:55:15.595256 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-08 00:55:15.595261 | orchestrator | Monday 08 September 2025 00:50:47 +0000 (0:00:00.589) 0:06:56.802 ******
2025-09-08 00:55:15.595265 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.595270 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.595275 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.595279 | orchestrator |
2025-09-08 00:55:15.595284 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-08 00:55:15.595289 | orchestrator | Monday 08 September 2025 00:50:48 +0000 (0:00:00.314) 0:06:57.117 ******
2025-09-08 00:55:15.595294 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.595298 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.595303 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.595308 | orchestrator |
2025-09-08 00:55:15.595313 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-08 00:55:15.595317 | orchestrator | Monday 08 September 2025 00:50:48 +0000 (0:00:00.680) 0:06:57.798 ******
2025-09-08 00:55:15.595322 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.595327 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.595331 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.595336 | orchestrator |
2025-09-08 00:55:15.595341 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-08 00:55:15.595346 | orchestrator | Monday 08 September 2025 00:50:49 +0000 (0:00:00.734) 0:06:58.532 ******
2025-09-08 00:55:15.595350 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.595355 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.595360 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.595365 | orchestrator |
2025-09-08 00:55:15.595369 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-08 00:55:15.595374 | orchestrator | Monday 08 September 2025 00:50:49 +0000 (0:00:00.540) 0:06:59.073 ******
2025-09-08 00:55:15.595379 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.595383 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.595388 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.595393 | orchestrator |
2025-09-08 00:55:15.595398 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-08 00:55:15.595402 | orchestrator | Monday 08 September 2025 00:50:50 +0000 (0:00:00.303) 0:06:59.376 ******
2025-09-08 00:55:15.595407 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.595412 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.595416 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.595421 | orchestrator |
2025-09-08 00:55:15.595426 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-08 00:55:15.595431 | orchestrator | Monday 08 September 2025 00:50:50 +0000 (0:00:00.332) 0:06:59.709 ******
2025-09-08 00:55:15.595435 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.595440 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.595445 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.595449 | orchestrator |
2025-09-08 00:55:15.595454 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-08 00:55:15.595472 | orchestrator | Monday 08 September 2025 00:50:50 +0000 (0:00:00.356) 0:07:00.065 ******
2025-09-08 00:55:15.595479 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.595484 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.595489 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.595494 | orchestrator |
2025-09-08 00:55:15.595498 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-08 00:55:15.595503 | orchestrator | Monday 08 September 2025 00:50:51 +0000 (0:00:00.638) 0:07:00.704 ******
2025-09-08 00:55:15.595510 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.595515 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.595520 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.595525 | orchestrator |
2025-09-08 00:55:15.595529 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-08 00:55:15.595534 | orchestrator | Monday 08 September 2025 00:50:51 +0000 (0:00:00.302) 0:07:01.006 ******
2025-09-08 00:55:15.595539 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.595544 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.595548 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.595553 | orchestrator |
2025-09-08 00:55:15.595558 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-08 00:55:15.595563 | orchestrator | Monday 08 September 2025 00:50:52 +0000 (0:00:00.316) 0:07:01.323 ******
2025-09-08 00:55:15.595567 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.595572 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.595577 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.595582 | orchestrator |
2025-09-08 00:55:15.595586 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-08 00:55:15.595591 | orchestrator | Monday 08 September 2025 00:50:52 +0000 (0:00:00.303) 0:07:01.626 ******
2025-09-08 00:55:15.595596 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.595601 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.595605 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.595610 | orchestrator |
2025-09-08 00:55:15.595615 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-08 00:55:15.595622 | orchestrator | Monday 08 September 2025 00:50:53 +0000 (0:00:00.566) 0:07:02.193 ******
2025-09-08 00:55:15.595627 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.595632 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.595637 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.595641 | orchestrator |
2025-09-08 00:55:15.595646 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-09-08 00:55:15.595651 | orchestrator | Monday 08 September 2025 00:50:53 +0000 (0:00:00.574) 0:07:02.768 ******
2025-09-08 00:55:15.595656 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.595660 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.595665 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.595670 | orchestrator |
2025-09-08 00:55:15.595674 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-09-08 00:55:15.595679 | orchestrator | Monday 08 September 2025 00:50:53 +0000 (0:00:00.319) 0:07:03.087 ******
2025-09-08 00:55:15.595684 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-08 00:55:15.595689 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-08 00:55:15.595693 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-08 00:55:15.595698 | orchestrator |
2025-09-08 00:55:15.595703 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-09-08 00:55:15.595708 | orchestrator | Monday 08 September 2025 00:50:54 +0000 (0:00:00.919) 0:07:04.006 ******
2025-09-08 00:55:15.595713 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:55:15.595717 | orchestrator |
2025-09-08 00:55:15.595722 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-09-08 00:55:15.595727 | orchestrator | Monday 08 September 2025 00:50:55 +0000 (0:00:00.807) 0:07:04.814 ******
2025-09-08 00:55:15.595735 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.595740 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.595744 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.595749 | orchestrator |
2025-09-08 00:55:15.595754 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2025-09-08 00:55:15.595759 | orchestrator | Monday 08 September 2025 00:50:56 +0000 (0:00:00.308) 0:07:05.123 ******
2025-09-08 00:55:15.595763 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.595768 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.595773 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.595778 | orchestrator |
2025-09-08 00:55:15.595783 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2025-09-08 00:55:15.595787 | orchestrator | Monday 08 September 2025 00:50:56 +0000 (0:00:00.306) 0:07:05.430 ******
2025-09-08 00:55:15.595792 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.595797 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.595801 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.595806 | orchestrator |
2025-09-08 00:55:15.595811 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2025-09-08 00:55:15.595816 | orchestrator | Monday 08 September 2025 00:50:57 +0000 (0:00:00.921) 0:07:06.352 ******
2025-09-08 00:55:15.595820 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.595825 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.595830 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.595835 | orchestrator |
2025-09-08 00:55:15.595839 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2025-09-08 00:55:15.595844 | orchestrator | Monday 08 September 2025 00:50:57 +0000 (0:00:00.348) 0:07:06.700 ******
2025-09-08 00:55:15.595849 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-08 00:55:15.595854 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-08 00:55:15.595858 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-08 00:55:15.595863 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-08 00:55:15.595868 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-08 00:55:15.595873 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-08 00:55:15.595878 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-08 00:55:15.595885 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-08 00:55:15.595890 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-08 00:55:15.595895 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-08 00:55:15.595900 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-08 00:55:15.595904 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-08 00:55:15.595909 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-08 00:55:15.595914 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-08 00:55:15.595919 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-08 00:55:15.595923 | orchestrator |
2025-09-08 00:55:15.595928 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2025-09-08 00:55:15.595933 | orchestrator | Monday 08 September 2025 00:51:00 +0000 (0:00:03.134) 0:07:09.835 ****** 2025-09-08 00:55:15.595938 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.595942 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.595947 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.595974 | orchestrator | 2025-09-08 00:55:15.595979 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-09-08 00:55:15.595993 | orchestrator | Monday 08 September 2025 00:51:01 +0000 (0:00:00.308) 0:07:10.144 ****** 2025-09-08 00:55:15.595998 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:55:15.596003 | orchestrator | 2025-09-08 00:55:15.596008 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-09-08 00:55:15.596013 | orchestrator | Monday 08 September 2025 00:51:01 +0000 (0:00:00.840) 0:07:10.984 ****** 2025-09-08 00:55:15.596017 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-08 00:55:15.596022 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-08 00:55:15.596027 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-08 00:55:15.596032 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-09-08 00:55:15.596037 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-09-08 00:55:15.596042 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-09-08 00:55:15.596046 | orchestrator | 2025-09-08 00:55:15.596051 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-09-08 00:55:15.596056 | orchestrator | Monday 08 September 2025 00:51:02 +0000 (0:00:01.052) 0:07:12.037 ****** 2025-09-08 00:55:15.596060 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:55:15.596065 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-08 00:55:15.596070 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-08 00:55:15.596075 | orchestrator | 2025-09-08 00:55:15.596080 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-09-08 00:55:15.596084 | orchestrator | Monday 08 September 2025 00:51:05 +0000 (0:00:02.126) 0:07:14.163 ****** 2025-09-08 00:55:15.596089 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-08 00:55:15.596094 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-08 00:55:15.596099 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:55:15.596103 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-08 00:55:15.596108 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-08 00:55:15.596113 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:55:15.596118 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-08 00:55:15.596122 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-08 00:55:15.596127 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:55:15.596132 | orchestrator | 2025-09-08 00:55:15.596137 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-09-08 00:55:15.596142 | orchestrator | Monday 08 September 2025 00:51:06 +0000 (0:00:01.437) 0:07:15.601 ****** 2025-09-08 00:55:15.596146 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-08 00:55:15.596151 | orchestrator | 2025-09-08 00:55:15.596156 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-09-08 00:55:15.596161 | orchestrator | Monday 08 September 2025 00:51:08 +0000 (0:00:02.137) 0:07:17.739 ****** 2025-09-08 00:55:15.596166 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:55:15.596170 | orchestrator | 2025-09-08 00:55:15.596175 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-09-08 00:55:15.596180 | orchestrator | Monday 08 September 2025 00:51:09 +0000 (0:00:00.532) 0:07:18.271 ****** 2025-09-08 00:55:15.596185 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6b18b724-0587-5812-9148-41071cea985b', 'data_vg': 'ceph-6b18b724-0587-5812-9148-41071cea985b'}) 2025-09-08 00:55:15.596190 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b', 'data_vg': 'ceph-ea3e0024-52d1-5c15-9011-f3e2d7c1d29b'}) 2025-09-08 00:55:15.596195 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-df550631-cfd3-5799-aa47-c702e103b9e1', 'data_vg': 'ceph-df550631-cfd3-5799-aa47-c702e103b9e1'}) 2025-09-08 00:55:15.596204 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-aa077d44-869a-533b-aa21-81dea0f926a7', 'data_vg': 'ceph-aa077d44-869a-533b-aa21-81dea0f926a7'}) 2025-09-08 00:55:15.596211 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9b42feaf-b3bc-5f68-b3eb-37674b93132b', 'data_vg': 'ceph-9b42feaf-b3bc-5f68-b3eb-37674b93132b'}) 2025-09-08 00:55:15.596216 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-eee7454c-3e15-5681-817b-16336d12a7fd', 'data_vg': 'ceph-eee7454c-3e15-5681-817b-16336d12a7fd'}) 2025-09-08 00:55:15.596221 | orchestrator | 2025-09-08 00:55:15.596226 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-09-08 00:55:15.596231 | orchestrator | Monday 08 September 2025 00:51:48 +0000 (0:00:39.492) 0:07:57.763 ****** 2025-09-08 00:55:15.596235 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.596240 | orchestrator | skipping: [testbed-node-4] 2025-09-08 
00:55:15.596245 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.596250 | orchestrator | 2025-09-08 00:55:15.596254 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-09-08 00:55:15.596259 | orchestrator | Monday 08 September 2025 00:51:49 +0000 (0:00:00.545) 0:07:58.309 ****** 2025-09-08 00:55:15.596264 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:55:15.596269 | orchestrator | 2025-09-08 00:55:15.596273 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-09-08 00:55:15.596278 | orchestrator | Monday 08 September 2025 00:51:49 +0000 (0:00:00.513) 0:07:58.822 ****** 2025-09-08 00:55:15.596286 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.596291 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.596296 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.596301 | orchestrator | 2025-09-08 00:55:15.596305 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-09-08 00:55:15.596310 | orchestrator | Monday 08 September 2025 00:51:50 +0000 (0:00:00.682) 0:07:59.505 ****** 2025-09-08 00:55:15.596315 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.596320 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.596324 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.596329 | orchestrator | 2025-09-08 00:55:15.596334 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-09-08 00:55:15.596339 | orchestrator | Monday 08 September 2025 00:51:53 +0000 (0:00:02.895) 0:08:02.401 ****** 2025-09-08 00:55:15.596344 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:55:15.596348 | orchestrator | 2025-09-08 00:55:15.596353 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-09-08 00:55:15.596358 | orchestrator | Monday 08 September 2025 00:51:53 +0000 (0:00:00.541) 0:08:02.942 ****** 2025-09-08 00:55:15.596363 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:55:15.596367 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:55:15.596372 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:55:15.596377 | orchestrator | 2025-09-08 00:55:15.596382 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-09-08 00:55:15.596386 | orchestrator | Monday 08 September 2025 00:51:54 +0000 (0:00:01.126) 0:08:04.069 ****** 2025-09-08 00:55:15.596391 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:55:15.596396 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:55:15.596401 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:55:15.596405 | orchestrator | 2025-09-08 00:55:15.596410 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-09-08 00:55:15.596415 | orchestrator | Monday 08 September 2025 00:51:56 +0000 (0:00:01.441) 0:08:05.510 ****** 2025-09-08 00:55:15.596420 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:55:15.596424 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:55:15.596433 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:55:15.596438 | orchestrator | 2025-09-08 00:55:15.596443 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-09-08 00:55:15.596448 | orchestrator | Monday 08 September 2025 00:51:58 +0000 (0:00:01.743) 0:08:07.253 ****** 2025-09-08 00:55:15.596452 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.596483 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.596488 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.596493 | orchestrator | 2025-09-08 00:55:15.596497 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-09-08 00:55:15.596502 | orchestrator | Monday 08 September 2025 00:51:58 +0000 (0:00:00.339) 0:08:07.592 ****** 2025-09-08 00:55:15.596507 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.596512 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.596516 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.596521 | orchestrator | 2025-09-08 00:55:15.596526 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-09-08 00:55:15.596531 | orchestrator | Monday 08 September 2025 00:51:58 +0000 (0:00:00.329) 0:08:07.922 ****** 2025-09-08 00:55:15.596535 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-08 00:55:15.596540 | orchestrator | ok: [testbed-node-4] => (item=2) 2025-09-08 00:55:15.596545 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-09-08 00:55:15.596550 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-09-08 00:55:15.596554 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-09-08 00:55:15.596559 | orchestrator | ok: [testbed-node-5] => (item=1) 2025-09-08 00:55:15.596564 | orchestrator | 2025-09-08 00:55:15.596568 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-09-08 00:55:15.596573 | orchestrator | Monday 08 September 2025 00:52:00 +0000 (0:00:01.298) 0:08:09.220 ****** 2025-09-08 00:55:15.596578 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-09-08 00:55:15.596582 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-08 00:55:15.596587 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-09-08 00:55:15.596592 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-09-08 00:55:15.596597 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-09-08 00:55:15.596601 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-09-08 00:55:15.596606 | orchestrator | 2025-09-08 00:55:15.596611 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-09-08 00:55:15.596616 | orchestrator | Monday 08 September 2025 00:52:02 +0000 (0:00:02.423) 0:08:11.644 ****** 2025-09-08 00:55:15.596623 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-08 00:55:15.596628 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-09-08 00:55:15.596633 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-09-08 00:55:15.596637 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-09-08 00:55:15.596642 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-09-08 00:55:15.596647 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-09-08 00:55:15.596651 | orchestrator | 2025-09-08 00:55:15.596656 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-09-08 00:55:15.596661 | orchestrator | Monday 08 September 2025 00:52:06 +0000 (0:00:03.481) 0:08:15.126 ****** 2025-09-08 00:55:15.596666 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.596670 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.596675 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-08 00:55:15.596680 | orchestrator | 2025-09-08 00:55:15.596685 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-09-08 00:55:15.596689 | orchestrator | Monday 08 September 2025 00:52:09 +0000 (0:00:03.191) 0:08:18.318 ****** 2025-09-08 00:55:15.596694 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.596698 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.596703 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-09-08 00:55:15.596711 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-08 00:55:15.596715 | orchestrator | 2025-09-08 00:55:15.596722 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-09-08 00:55:15.596727 | orchestrator | Monday 08 September 2025 00:52:22 +0000 (0:00:13.104) 0:08:31.422 ****** 2025-09-08 00:55:15.596731 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.596736 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.596740 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.596745 | orchestrator | 2025-09-08 00:55:15.596749 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-08 00:55:15.596754 | orchestrator | Monday 08 September 2025 00:52:23 +0000 (0:00:00.966) 0:08:32.388 ****** 2025-09-08 00:55:15.596759 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.596763 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.596767 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.596772 | orchestrator | 2025-09-08 00:55:15.596776 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-08 00:55:15.596781 | orchestrator | Monday 08 September 2025 00:52:23 +0000 (0:00:00.603) 0:08:32.992 ****** 2025-09-08 00:55:15.596785 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:55:15.596790 | orchestrator | 2025-09-08 00:55:15.596794 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-08 00:55:15.596799 | orchestrator | Monday 08 September 2025 00:52:24 +0000 (0:00:00.569) 0:08:33.562 ****** 2025-09-08 00:55:15.596803 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-08 00:55:15.596808 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-09-08 00:55:15.596812 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-08 00:55:15.596817 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.596821 | orchestrator | 2025-09-08 00:55:15.596826 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-08 00:55:15.596830 | orchestrator | Monday 08 September 2025 00:52:24 +0000 (0:00:00.379) 0:08:33.942 ****** 2025-09-08 00:55:15.596835 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.596839 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.596844 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.596848 | orchestrator | 2025-09-08 00:55:15.596853 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-08 00:55:15.596857 | orchestrator | Monday 08 September 2025 00:52:25 +0000 (0:00:00.310) 0:08:34.252 ****** 2025-09-08 00:55:15.596862 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.596866 | orchestrator | 2025-09-08 00:55:15.596871 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-08 00:55:15.596875 | orchestrator | Monday 08 September 2025 00:52:25 +0000 (0:00:00.253) 0:08:34.505 ****** 2025-09-08 00:55:15.596880 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.596884 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.596889 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.596893 | orchestrator | 2025-09-08 00:55:15.596898 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-08 00:55:15.596902 | orchestrator | Monday 08 September 2025 00:52:26 +0000 (0:00:00.624) 0:08:35.130 ****** 2025-09-08 00:55:15.596907 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.596911 | orchestrator | 2025-09-08 00:55:15.596916 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-08 00:55:15.596920 | orchestrator | Monday 08 September 2025 00:52:26 +0000 (0:00:00.228) 0:08:35.359 ****** 2025-09-08 00:55:15.596925 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.596929 | orchestrator | 2025-09-08 00:55:15.596934 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-08 00:55:15.596938 | orchestrator | Monday 08 September 2025 00:52:26 +0000 (0:00:00.229) 0:08:35.588 ****** 2025-09-08 00:55:15.596947 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.596951 | orchestrator | 2025-09-08 00:55:15.596956 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-08 00:55:15.596960 | orchestrator | Monday 08 September 2025 00:52:26 +0000 (0:00:00.137) 0:08:35.725 ****** 2025-09-08 00:55:15.596965 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.596969 | orchestrator | 2025-09-08 00:55:15.596974 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-08 00:55:15.596978 | orchestrator | Monday 08 September 2025 00:52:26 +0000 (0:00:00.224) 0:08:35.950 ****** 2025-09-08 00:55:15.596983 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.596987 | orchestrator | 2025-09-08 00:55:15.596992 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-08 00:55:15.596998 | orchestrator | Monday 08 September 2025 00:52:27 +0000 (0:00:00.235) 0:08:36.185 ****** 2025-09-08 00:55:15.597003 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-08 00:55:15.597008 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-08 00:55:15.597012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-08 00:55:15.597017 | orchestrator | skipping: [testbed-node-3] 2025-09-08 
00:55:15.597021 | orchestrator | 2025-09-08 00:55:15.597026 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-08 00:55:15.597030 | orchestrator | Monday 08 September 2025 00:52:27 +0000 (0:00:00.393) 0:08:36.579 ****** 2025-09-08 00:55:15.597035 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.597039 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.597043 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.597048 | orchestrator | 2025-09-08 00:55:15.597052 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-08 00:55:15.597057 | orchestrator | Monday 08 September 2025 00:52:27 +0000 (0:00:00.296) 0:08:36.876 ****** 2025-09-08 00:55:15.597061 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.597066 | orchestrator | 2025-09-08 00:55:15.597070 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-08 00:55:15.597075 | orchestrator | Monday 08 September 2025 00:52:28 +0000 (0:00:00.825) 0:08:37.702 ****** 2025-09-08 00:55:15.597079 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.597084 | orchestrator | 2025-09-08 00:55:15.597092 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-09-08 00:55:15.597097 | orchestrator | 2025-09-08 00:55:15.597101 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-08 00:55:15.597106 | orchestrator | Monday 08 September 2025 00:52:29 +0000 (0:00:00.689) 0:08:38.391 ****** 2025-09-08 00:55:15.597111 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:55:15.597116 | orchestrator | 2025-09-08 00:55:15.597121 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-09-08 00:55:15.597125 | orchestrator | Monday 08 September 2025 00:52:30 +0000 (0:00:01.279) 0:08:39.670 ****** 2025-09-08 00:55:15.597130 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:55:15.597135 | orchestrator | 2025-09-08 00:55:15.597139 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-08 00:55:15.597144 | orchestrator | Monday 08 September 2025 00:52:31 +0000 (0:00:01.321) 0:08:40.991 ****** 2025-09-08 00:55:15.597148 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.597153 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.597157 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.597162 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.597166 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.597171 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.597178 | orchestrator | 2025-09-08 00:55:15.597183 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-08 00:55:15.597187 | orchestrator | Monday 08 September 2025 00:52:33 +0000 (0:00:01.392) 0:08:42.384 ****** 2025-09-08 00:55:15.597192 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.597196 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.597201 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.597205 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.597210 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.597214 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.597219 | orchestrator | 2025-09-08 00:55:15.597223 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-08 00:55:15.597228 | orchestrator | Monday 08 
September 2025 00:52:34 +0000 (0:00:00.747) 0:08:43.132 ****** 2025-09-08 00:55:15.597232 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.597237 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.597241 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.597246 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.597250 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.597255 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.597259 | orchestrator | 2025-09-08 00:55:15.597264 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-08 00:55:15.597268 | orchestrator | Monday 08 September 2025 00:52:34 +0000 (0:00:00.959) 0:08:44.092 ****** 2025-09-08 00:55:15.597273 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.597277 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.597282 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.597286 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.597290 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.597295 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.597299 | orchestrator | 2025-09-08 00:55:15.597304 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-08 00:55:15.597309 | orchestrator | Monday 08 September 2025 00:52:35 +0000 (0:00:00.698) 0:08:44.791 ****** 2025-09-08 00:55:15.597313 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.597317 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.597322 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.597326 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.597331 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.597335 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.597340 | orchestrator | 2025-09-08 00:55:15.597344 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2025-09-08 00:55:15.597349 | orchestrator | Monday 08 September 2025 00:52:36 +0000 (0:00:01.047) 0:08:45.838 ****** 2025-09-08 00:55:15.597354 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.597358 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.597362 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.597367 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.597371 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.597376 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.597380 | orchestrator | 2025-09-08 00:55:15.597385 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-08 00:55:15.597391 | orchestrator | Monday 08 September 2025 00:52:37 +0000 (0:00:00.900) 0:08:46.739 ****** 2025-09-08 00:55:15.597396 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.597400 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.597405 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.597409 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:15.597414 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:15.597418 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:15.597423 | orchestrator | 2025-09-08 00:55:15.597427 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-08 00:55:15.597432 | orchestrator | Monday 08 September 2025 00:52:38 +0000 (0:00:00.580) 0:08:47.319 ****** 2025-09-08 00:55:15.597439 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.597444 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.597449 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.597453 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:15.597469 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:15.597474 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:15.597478 | 
orchestrator |
2025-09-08 00:55:15.597483 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-08 00:55:15.597487 | orchestrator | Monday 08 September 2025 00:52:39 +0000 (0:00:01.469) 0:08:48.789 ******
2025-09-08 00:55:15.597492 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.597496 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.597500 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.597505 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.597509 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.597514 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.597518 | orchestrator |
2025-09-08 00:55:15.597525 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-08 00:55:15.597530 | orchestrator | Monday 08 September 2025 00:52:40 +0000 (0:00:01.016) 0:08:49.806 ******
2025-09-08 00:55:15.597534 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.597539 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.597543 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.597548 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.597552 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.597557 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.597561 | orchestrator |
2025-09-08 00:55:15.597566 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-08 00:55:15.597570 | orchestrator | Monday 08 September 2025 00:52:41 +0000 (0:00:00.888) 0:08:50.694 ******
2025-09-08 00:55:15.597575 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.597579 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.597584 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.597588 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.597593 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.597597 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.597602 | orchestrator |
2025-09-08 00:55:15.597606 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-08 00:55:15.597611 | orchestrator | Monday 08 September 2025 00:52:42 +0000 (0:00:00.624) 0:08:51.319 ******
2025-09-08 00:55:15.597615 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.597620 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.597624 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.597629 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.597633 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.597638 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.597642 | orchestrator |
2025-09-08 00:55:15.597647 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-08 00:55:15.597651 | orchestrator | Monday 08 September 2025 00:52:43 +0000 (0:00:01.043) 0:08:52.362 ******
2025-09-08 00:55:15.597656 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.597660 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.597665 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.597669 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.597674 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.597678 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.597683 | orchestrator |
2025-09-08 00:55:15.597687 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-08 00:55:15.597692 | orchestrator | Monday 08 September 2025 00:52:43 +0000 (0:00:00.616) 0:08:52.979 ******
2025-09-08 00:55:15.597696 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.597701 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.597705 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.597710 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.597714 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.597722 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.597726 | orchestrator |
2025-09-08 00:55:15.597731 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-08 00:55:15.597735 | orchestrator | Monday 08 September 2025 00:52:44 +0000 (0:00:00.854) 0:08:53.833 ******
2025-09-08 00:55:15.597740 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.597744 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.597749 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.597753 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.597758 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.597762 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.597767 | orchestrator |
2025-09-08 00:55:15.597771 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-08 00:55:15.597776 | orchestrator | Monday 08 September 2025 00:52:45 +0000 (0:00:00.696) 0:08:54.530 ******
2025-09-08 00:55:15.597780 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.597785 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.597789 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.597794 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:55:15.597798 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:55:15.597803 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:55:15.597807 | orchestrator |
2025-09-08 00:55:15.597811 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-08 00:55:15.597816 | orchestrator | Monday 08 September 2025 00:52:46 +0000 (0:00:00.933) 0:08:55.463 ******
2025-09-08 00:55:15.597820 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.597825 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.597829 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.597834 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.597838 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.597843 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.597847 | orchestrator |
2025-09-08 00:55:15.597854 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-08 00:55:15.597858 | orchestrator | Monday 08 September 2025 00:52:46 +0000 (0:00:00.609) 0:08:56.073 ******
2025-09-08 00:55:15.597863 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.597867 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.597872 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.597876 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.597881 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.597885 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.597890 | orchestrator |
2025-09-08 00:55:15.597894 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-08 00:55:15.597899 | orchestrator | Monday 08 September 2025 00:52:47 +0000 (0:00:01.014) 0:08:57.088 ******
2025-09-08 00:55:15.597903 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.597908 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.597912 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.597916 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.597921 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.597925 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.597930 | orchestrator |
2025-09-08 00:55:15.597934 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-09-08 00:55:15.597939 | orchestrator | Monday 08 September 2025 00:52:49 +0000 (0:00:01.259) 0:08:58.347 ******
2025-09-08 00:55:15.597943 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-08 00:55:15.597948 | orchestrator |
2025-09-08 00:55:15.597952 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2025-09-08 00:55:15.597959 | orchestrator | Monday 08 September 2025 00:52:53 +0000 (0:00:04.134) 0:09:02.482 ******
2025-09-08 00:55:15.597964 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-08 00:55:15.597968 | orchestrator |
2025-09-08 00:55:15.597973 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2025-09-08 00:55:15.597981 | orchestrator | Monday 08 September 2025 00:52:55 +0000 (0:00:02.014) 0:09:04.496 ******
2025-09-08 00:55:15.597985 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:55:15.597990 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:55:15.597994 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:55:15.597999 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.598003 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:55:15.598008 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:55:15.598034 | orchestrator |
2025-09-08 00:55:15.598040 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2025-09-08 00:55:15.598045 | orchestrator | Monday 08 September 2025 00:52:56 +0000 (0:00:01.474) 0:09:05.971 ******
2025-09-08 00:55:15.598050 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:55:15.598054 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:55:15.598059 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:55:15.598063 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:55:15.598068 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:55:15.598072 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:55:15.598076 | orchestrator |
2025-09-08 00:55:15.598081 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2025-09-08 00:55:15.598086 | orchestrator | Monday 08 September 2025 00:52:58 +0000 (0:00:01.405) 0:09:07.376 ******
2025-09-08 00:55:15.598090 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:55:15.598095 | orchestrator |
2025-09-08 00:55:15.598100 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2025-09-08 00:55:15.598104 | orchestrator | Monday 08 September 2025 00:52:59 +0000 (0:00:01.236) 0:09:08.613 ******
2025-09-08 00:55:15.598109 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:55:15.598113 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:55:15.598118 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:55:15.598122 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:55:15.598126 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:55:15.598131 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:55:15.598135 | orchestrator |
2025-09-08 00:55:15.598140 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2025-09-08 00:55:15.598144 | orchestrator | Monday 08 September 2025 00:53:01 +0000 (0:00:01.628) 0:09:10.241 ******
2025-09-08 00:55:15.598149 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:55:15.598153 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:55:15.598158 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:55:15.598162 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:55:15.598167 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:55:15.598171 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:55:15.598176 | orchestrator |
2025-09-08 00:55:15.598180 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2025-09-08 00:55:15.598185 | orchestrator | Monday 08 September 2025 00:53:04 +0000 (0:00:03.825) 0:09:14.066 ******
2025-09-08 00:55:15.598189 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:55:15.598194 | orchestrator |
2025-09-08 00:55:15.598198 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-09-08 00:55:15.598203 | orchestrator | Monday 08 September 2025 00:53:06 +0000 (0:00:01.135) 0:09:15.202 ******
2025-09-08 00:55:15.598207 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.598212 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.598216 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.598221 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.598225 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.598230 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.598234 | orchestrator |
2025-09-08 00:55:15.598239 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-09-08 00:55:15.598243 | orchestrator | Monday 08 September 2025 00:53:06 +0000 (0:00:00.607) 0:09:15.810 ******
2025-09-08 00:55:15.598251 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:55:15.598256 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:55:15.598261 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:55:15.598265 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:55:15.598269 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:55:15.598274 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:55:15.598278 | orchestrator |
2025-09-08 00:55:15.598285 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-09-08 00:55:15.598290 | orchestrator | Monday 08 September 2025 00:53:09 +0000 (0:00:02.328) 0:09:18.139 ******
2025-09-08 00:55:15.598294 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.598299 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.598303 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.598308 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:55:15.598312 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:55:15.598317 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:55:15.598321 | orchestrator |
2025-09-08 00:55:15.598326 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-09-08 00:55:15.598330 | orchestrator |
2025-09-08 00:55:15.598335 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-08 00:55:15.598339 | orchestrator | Monday 08 September 2025 00:53:09 +0000 (0:00:00.908) 0:09:19.048 ******
2025-09-08 00:55:15.598344 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:55:15.598349 | orchestrator |
2025-09-08 00:55:15.598353 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-08 00:55:15.598358 | orchestrator | Monday 08 September 2025 00:53:10 +0000 (0:00:00.872) 0:09:19.920 ******
2025-09-08 00:55:15.598362 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:55:15.598367 | orchestrator |
2025-09-08 00:55:15.598374 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-08 00:55:15.598379 | orchestrator | Monday 08 September 2025 00:53:11 +0000 (0:00:00.503) 0:09:20.423 ******
2025-09-08 00:55:15.598383 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.598388 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.598392 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.598397 | orchestrator |
2025-09-08 00:55:15.598401 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-08 00:55:15.598406 | orchestrator | Monday 08 September 2025 00:53:11 +0000 (0:00:00.576) 0:09:21.000 ******
2025-09-08 00:55:15.598410 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.598415 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.598419 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.598423 | orchestrator |
2025-09-08 00:55:15.598428 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-08 00:55:15.598433 | orchestrator | Monday 08 September 2025 00:53:12 +0000 (0:00:00.685) 0:09:21.685 ******
2025-09-08 00:55:15.598437 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.598441 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.598446 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.598450 | orchestrator |
2025-09-08 00:55:15.598477 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-08 00:55:15.598483 | orchestrator | Monday 08 September 2025 00:53:13 +0000 (0:00:00.695) 0:09:22.380 ******
2025-09-08 00:55:15.598487 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.598492 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.598496 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.598501 | orchestrator |
2025-09-08 00:55:15.598505 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-08 00:55:15.598510 | orchestrator | Monday 08 September 2025 00:53:14 +0000 (0:00:00.794) 0:09:23.175 ******
2025-09-08 00:55:15.598515 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.598523 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.598527 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.598532 | orchestrator |
2025-09-08 00:55:15.598536 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-08 00:55:15.598541 | orchestrator | Monday 08 September 2025 00:53:14 +0000 (0:00:00.557) 0:09:23.732 ******
2025-09-08 00:55:15.598545 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.598549 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.598554 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.598558 | orchestrator |
2025-09-08 00:55:15.598563 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-08 00:55:15.598568 | orchestrator | Monday 08 September 2025 00:53:14 +0000 (0:00:00.310) 0:09:24.042 ******
2025-09-08 00:55:15.598572 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.598576 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.598580 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.598584 | orchestrator |
2025-09-08 00:55:15.598588 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-08 00:55:15.598592 | orchestrator | Monday 08 September 2025 00:53:15 +0000 (0:00:00.352) 0:09:24.395 ******
2025-09-08 00:55:15.598596 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.598600 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.598604 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.598608 | orchestrator |
2025-09-08 00:55:15.598612 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-08 00:55:15.598616 | orchestrator | Monday 08 September 2025 00:53:16 +0000 (0:00:00.842) 0:09:25.238 ******
2025-09-08 00:55:15.598621 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.598625 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.598629 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.598633 | orchestrator |
2025-09-08 00:55:15.598637 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-08 00:55:15.598641 | orchestrator | Monday 08 September 2025 00:53:17 +0000 (0:00:01.104) 0:09:26.342 ******
2025-09-08 00:55:15.598645 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.598649 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.598653 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.598657 | orchestrator |
2025-09-08 00:55:15.598661 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-08 00:55:15.598665 | orchestrator | Monday 08 September 2025 00:53:17 +0000 (0:00:00.309) 0:09:26.652 ******
2025-09-08 00:55:15.598669 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.598673 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.598677 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.598682 | orchestrator |
2025-09-08 00:55:15.598686 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-08 00:55:15.598692 | orchestrator | Monday 08 September 2025 00:53:17 +0000 (0:00:00.314) 0:09:26.967 ******
2025-09-08 00:55:15.598696 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.598700 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.598704 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.598708 | orchestrator |
2025-09-08 00:55:15.598712 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-08 00:55:15.598716 | orchestrator | Monday 08 September 2025 00:53:18 +0000 (0:00:00.377) 0:09:27.344 ******
2025-09-08 00:55:15.598721 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.598725 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.598729 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.598733 | orchestrator |
2025-09-08 00:55:15.598737 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-08 00:55:15.598741 | orchestrator | Monday 08 September 2025 00:53:18 +0000 (0:00:00.588) 0:09:27.933 ******
2025-09-08 00:55:15.598745 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.598749 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.598753 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.598760 | orchestrator |
2025-09-08 00:55:15.598764 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-08 00:55:15.598768 | orchestrator | Monday 08 September 2025 00:53:19 +0000 (0:00:00.353) 0:09:28.287 ******
2025-09-08 00:55:15.598772 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.598776 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.598780 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.598785 | orchestrator |
2025-09-08 00:55:15.598789 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-08 00:55:15.598795 | orchestrator | Monday 08 September 2025 00:53:19 +0000 (0:00:00.299) 0:09:28.586 ******
2025-09-08 00:55:15.598799 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.598804 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.598808 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.598812 | orchestrator |
2025-09-08 00:55:15.598816 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-08 00:55:15.598820 | orchestrator | Monday 08 September 2025 00:53:19 +0000 (0:00:00.323) 0:09:28.910 ******
2025-09-08 00:55:15.598824 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.598828 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.598832 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.598836 | orchestrator |
2025-09-08 00:55:15.598840 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-08 00:55:15.598844 | orchestrator | Monday 08 September 2025 00:53:20 +0000 (0:00:00.597) 0:09:29.508 ******
2025-09-08 00:55:15.598848 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.598852 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.598856 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.598860 | orchestrator |
2025-09-08 00:55:15.598865 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-08 00:55:15.598869 | orchestrator | Monday 08 September 2025 00:53:20 +0000 (0:00:00.365) 0:09:29.873 ******
2025-09-08 00:55:15.598873 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.598877 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.598881 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.598885 | orchestrator |
2025-09-08 00:55:15.598889 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-09-08 00:55:15.598893 | orchestrator | Monday 08 September 2025 00:53:21 +0000 (0:00:00.595) 0:09:30.469 ******
2025-09-08 00:55:15.598897 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.598901 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.598905 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-09-08 00:55:15.598910 | orchestrator |
2025-09-08 00:55:15.598914 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2025-09-08 00:55:15.598918 | orchestrator | Monday 08 September 2025 00:53:22 +0000 (0:00:00.720) 0:09:31.189 ******
2025-09-08 00:55:15.598922 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-08 00:55:15.598926 | orchestrator |
2025-09-08 00:55:15.598930 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2025-09-08 00:55:15.598934 | orchestrator | Monday 08 September 2025 00:53:24 +0000 (0:00:02.346) 0:09:33.536 ******
2025-09-08 00:55:15.598939 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-09-08 00:55:15.598944 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.598948 | orchestrator |
2025-09-08 00:55:15.598953 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2025-09-08 00:55:15.598957 | orchestrator | Monday 08 September 2025 00:53:24 +0000 (0:00:00.238) 0:09:33.775 ******
2025-09-08 00:55:15.598961 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-08 00:55:15.598972 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-08 00:55:15.598976 | orchestrator |
2025-09-08 00:55:15.598981 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2025-09-08 00:55:15.598985 | orchestrator | Monday 08 September 2025 00:53:32 +0000 (0:00:07.793) 0:09:41.568 ******
2025-09-08 00:55:15.598989 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-08 00:55:15.598993 | orchestrator |
2025-09-08 00:55:15.598997 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2025-09-08 00:55:15.599003 | orchestrator | Monday 08 September 2025 00:53:36 +0000 (0:00:03.850) 0:09:45.419 ******
2025-09-08 00:55:15.599007 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:55:15.599011 | orchestrator |
2025-09-08 00:55:15.599015 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2025-09-08 00:55:15.599019 | orchestrator | Monday 08 September 2025 00:53:37 +0000 (0:00:01.009) 0:09:46.428 ******
2025-09-08 00:55:15.599023 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-08 00:55:15.599027 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-08 00:55:15.599031 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-08 00:55:15.599035 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-09-08 00:55:15.599039 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-09-08 00:55:15.599044 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-09-08 00:55:15.599048 | orchestrator |
2025-09-08 00:55:15.599052 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2025-09-08 00:55:15.599056 | orchestrator | Monday 08 September 2025 00:53:38 +0000 (0:00:01.184) 0:09:47.613 ******
2025-09-08 00:55:15.599060 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-08 00:55:15.599066 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-08 00:55:15.599070 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-08 00:55:15.599074 | orchestrator |
2025-09-08 00:55:15.599078 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2025-09-08 00:55:15.599082 | orchestrator | Monday 08 September 2025 00:53:40 +0000 (0:00:02.206) 0:09:49.819 ******
2025-09-08 00:55:15.599087 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-08 00:55:15.599091 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-08 00:55:15.599095 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:55:15.599099 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-08 00:55:15.599103 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-08 00:55:15.599107 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:55:15.599111 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-08 00:55:15.599115 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-08 00:55:15.599119 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:55:15.599123 | orchestrator |
2025-09-08 00:55:15.599127 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2025-09-08 00:55:15.599131 | orchestrator | Monday 08 September 2025 00:53:41 +0000 (0:00:01.243) 0:09:51.063 ******
2025-09-08 00:55:15.599135 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:55:15.599139 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:55:15.599143 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:55:15.599147 | orchestrator |
2025-09-08 00:55:15.599151 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2025-09-08 00:55:15.599158 | orchestrator | Monday 08 September 2025 00:53:44 +0000 (0:00:02.720) 0:09:53.783 ******
2025-09-08 00:55:15.599162 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.599166 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.599170 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.599174 | orchestrator |
2025-09-08 00:55:15.599178 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2025-09-08 00:55:15.599182 | orchestrator | Monday 08 September 2025 00:53:45 +0000 (0:00:00.668) 0:09:54.451 ******
2025-09-08 00:55:15.599187 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:55:15.599191 | orchestrator |
2025-09-08 00:55:15.599195 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2025-09-08 00:55:15.599199 | orchestrator | Monday 08 September 2025 00:53:46 +0000 (0:00:00.726) 0:09:55.178 ******
2025-09-08 00:55:15.599203 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:55:15.599207 | orchestrator |
2025-09-08 00:55:15.599211 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2025-09-08 00:55:15.599215 | orchestrator | Monday 08 September 2025 00:53:47 +0000 (0:00:01.163) 0:09:56.341 ******
2025-09-08 00:55:15.599219 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:55:15.599223 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:55:15.599227 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:55:15.599231 | orchestrator |
2025-09-08 00:55:15.599235 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2025-09-08 00:55:15.599239 | orchestrator | Monday 08 September 2025 00:53:48 +0000 (0:00:01.384) 0:09:57.726 ******
2025-09-08 00:55:15.599243 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:55:15.599247 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:55:15.599251 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:55:15.599256 | orchestrator |
2025-09-08 00:55:15.599260 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2025-09-08 00:55:15.599264 | orchestrator | Monday 08 September 2025 00:53:49 +0000 (0:00:01.213) 0:09:58.939 ******
2025-09-08 00:55:15.599268 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:55:15.599272 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:55:15.599276 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:55:15.599280 | orchestrator |
2025-09-08 00:55:15.599284 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-09-08 00:55:15.599288 | orchestrator | Monday 08 September 2025 00:53:51 +0000 (0:00:01.747) 0:10:00.686 ******
2025-09-08 00:55:15.599292 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:55:15.599296 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:55:15.599300 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:55:15.599304 | orchestrator |
2025-09-08 00:55:15.599308 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-09-08 00:55:15.599312 | orchestrator | Monday 08 September 2025 00:53:53 +0000 (0:00:02.245) 0:10:02.932 ******
2025-09-08 00:55:15.599318 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.599322 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.599326 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.599330 | orchestrator |
2025-09-08 00:55:15.599334 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-08 00:55:15.599338 | orchestrator | Monday 08 September 2025 00:53:55 +0000 (0:00:01.248) 0:10:04.181 ******
2025-09-08 00:55:15.599343 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:55:15.599347 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:55:15.599351 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:55:15.599355 | orchestrator |
2025-09-08 00:55:15.599359 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-09-08 00:55:15.599363 | orchestrator | Monday 08 September 2025 00:53:56 +0000 (0:00:01.003) 0:10:05.185 ******
2025-09-08 00:55:15.599371 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:55:15.599376 | orchestrator |
2025-09-08 00:55:15.599380 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-09-08 00:55:15.599384 | orchestrator | Monday 08 September 2025 00:53:56 +0000 (0:00:00.549) 0:10:05.734 ******
2025-09-08 00:55:15.599388 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.599392 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.599396 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.599400 | orchestrator |
2025-09-08 00:55:15.599404 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-09-08 00:55:15.599410 | orchestrator | Monday 08 September 2025 00:53:56 +0000 (0:00:00.322) 0:10:06.057 ******
2025-09-08 00:55:15.599414 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:55:15.599419 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:55:15.599423 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:55:15.599427 | orchestrator |
2025-09-08 00:55:15.599431 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-09-08 00:55:15.599435 | orchestrator | Monday 08 September 2025 00:53:58 +0000 (0:00:01.459) 0:10:07.517 ******
2025-09-08 00:55:15.599439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:55:15.599443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:55:15.599447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:55:15.599451 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.599465 | orchestrator |
2025-09-08 00:55:15.599469 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-09-08 00:55:15.599473 | orchestrator | Monday 08 September 2025 00:53:59 +0000 (0:00:00.606) 0:10:08.124 ******
2025-09-08 00:55:15.599477 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.599482 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.599486 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.599490 | orchestrator |
2025-09-08 00:55:15.599494 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-09-08 00:55:15.599498 | orchestrator |
2025-09-08 00:55:15.599502 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-08 00:55:15.599506 | orchestrator | Monday 08 September 2025 00:53:59 +0000 (0:00:00.541) 0:10:08.665 ******
2025-09-08 00:55:15.599510 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:55:15.599514 | orchestrator |
2025-09-08 00:55:15.599518 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-08 00:55:15.599522 | orchestrator | Monday 08 September 2025 00:54:00 +0000 (0:00:00.746) 0:10:09.411 ******
2025-09-08 00:55:15.599526 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:55:15.599531 | orchestrator |
2025-09-08 00:55:15.599535 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-08 00:55:15.599539 | orchestrator | Monday 08 September 2025 00:54:00 +0000 (0:00:00.528) 0:10:09.939 ******
2025-09-08 00:55:15.599543 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.599547 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.599551 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.599555 | orchestrator |
2025-09-08 00:55:15.599559 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-08 00:55:15.599563 | orchestrator | Monday 08 September 2025 00:54:01 +0000 (0:00:00.527) 0:10:10.467 ******
2025-09-08 00:55:15.599567 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.599571 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.599575 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.599579 | orchestrator |
2025-09-08 00:55:15.599583 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-08 00:55:15.599587 | orchestrator | Monday 08 September 2025 00:54:02 +0000 (0:00:00.740) 0:10:11.208 ******
2025-09-08 00:55:15.599595 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.599599 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.599603 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.599607 | orchestrator |
2025-09-08 00:55:15.599611 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-08 00:55:15.599615 | orchestrator | Monday 08 September 2025 00:54:02 +0000 (0:00:00.724) 0:10:11.933 ******
2025-09-08 00:55:15.599619 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:55:15.599623 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:55:15.599627 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:55:15.599631 | orchestrator |
2025-09-08 00:55:15.599635 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-08 00:55:15.599639 | orchestrator | Monday 08 September 2025 00:54:03 +0000 (0:00:00.720) 0:10:12.653 ******
2025-09-08 00:55:15.599643 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.599647 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.599651 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:55:15.599656 | orchestrator |
2025-09-08 00:55:15.599660 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-08 00:55:15.599664 | orchestrator | Monday 08 September 2025 00:54:04 +0000 (0:00:00.591) 0:10:13.245 ******
2025-09-08 00:55:15.599668 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:55:15.599672 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:55:15.599678 | orchestrator | skipping:
[testbed-node-5] 2025-09-08 00:55:15.599682 | orchestrator | 2025-09-08 00:55:15.599686 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-08 00:55:15.599690 | orchestrator | Monday 08 September 2025 00:54:04 +0000 (0:00:00.309) 0:10:13.554 ****** 2025-09-08 00:55:15.599694 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.599698 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.599702 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.599706 | orchestrator | 2025-09-08 00:55:15.599710 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-08 00:55:15.599715 | orchestrator | Monday 08 September 2025 00:54:04 +0000 (0:00:00.316) 0:10:13.871 ****** 2025-09-08 00:55:15.599719 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.599723 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.599727 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.599731 | orchestrator | 2025-09-08 00:55:15.599735 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-08 00:55:15.599739 | orchestrator | Monday 08 September 2025 00:54:05 +0000 (0:00:00.829) 0:10:14.701 ****** 2025-09-08 00:55:15.599743 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.599747 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.599751 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.599755 | orchestrator | 2025-09-08 00:55:15.599759 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-08 00:55:15.599763 | orchestrator | Monday 08 September 2025 00:54:06 +0000 (0:00:01.021) 0:10:15.722 ****** 2025-09-08 00:55:15.599767 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.599774 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.599779 | orchestrator | skipping: [testbed-node-5] 2025-09-08 
00:55:15.599783 | orchestrator | 2025-09-08 00:55:15.599787 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-08 00:55:15.599791 | orchestrator | Monday 08 September 2025 00:54:06 +0000 (0:00:00.307) 0:10:16.030 ****** 2025-09-08 00:55:15.599795 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.599799 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.599803 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.599807 | orchestrator | 2025-09-08 00:55:15.599811 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-08 00:55:15.599815 | orchestrator | Monday 08 September 2025 00:54:07 +0000 (0:00:00.298) 0:10:16.328 ****** 2025-09-08 00:55:15.599819 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.599826 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.599830 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.599834 | orchestrator | 2025-09-08 00:55:15.599839 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-08 00:55:15.599843 | orchestrator | Monday 08 September 2025 00:54:07 +0000 (0:00:00.363) 0:10:16.692 ****** 2025-09-08 00:55:15.599847 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.599851 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.599855 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.599859 | orchestrator | 2025-09-08 00:55:15.599863 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-08 00:55:15.599867 | orchestrator | Monday 08 September 2025 00:54:08 +0000 (0:00:00.576) 0:10:17.268 ****** 2025-09-08 00:55:15.599871 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.599875 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.599879 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.599883 | orchestrator | 2025-09-08 
00:55:15.599887 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-08 00:55:15.599891 | orchestrator | Monday 08 September 2025 00:54:08 +0000 (0:00:00.326) 0:10:17.595 ****** 2025-09-08 00:55:15.599896 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.599900 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.599904 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.599908 | orchestrator | 2025-09-08 00:55:15.599912 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-08 00:55:15.599916 | orchestrator | Monday 08 September 2025 00:54:08 +0000 (0:00:00.305) 0:10:17.901 ****** 2025-09-08 00:55:15.599920 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.599924 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.599928 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.599932 | orchestrator | 2025-09-08 00:55:15.599936 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-08 00:55:15.599940 | orchestrator | Monday 08 September 2025 00:54:09 +0000 (0:00:00.299) 0:10:18.201 ****** 2025-09-08 00:55:15.599944 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.599948 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.599952 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.599956 | orchestrator | 2025-09-08 00:55:15.599961 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-08 00:55:15.599965 | orchestrator | Monday 08 September 2025 00:54:09 +0000 (0:00:00.539) 0:10:18.740 ****** 2025-09-08 00:55:15.599969 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.599973 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.599977 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.599981 | orchestrator | 2025-09-08 00:55:15.599985 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-08 00:55:15.599989 | orchestrator | Monday 08 September 2025 00:54:09 +0000 (0:00:00.352) 0:10:19.093 ****** 2025-09-08 00:55:15.599993 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.599997 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.600001 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.600005 | orchestrator | 2025-09-08 00:55:15.600009 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-09-08 00:55:15.600013 | orchestrator | Monday 08 September 2025 00:54:10 +0000 (0:00:00.565) 0:10:19.658 ****** 2025-09-08 00:55:15.600018 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:55:15.600022 | orchestrator | 2025-09-08 00:55:15.600026 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-08 00:55:15.600030 | orchestrator | Monday 08 September 2025 00:54:11 +0000 (0:00:00.796) 0:10:20.455 ****** 2025-09-08 00:55:15.600034 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:55:15.600038 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-08 00:55:15.600044 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-08 00:55:15.600051 | orchestrator | 2025-09-08 00:55:15.600056 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-08 00:55:15.600060 | orchestrator | Monday 08 September 2025 00:54:13 +0000 (0:00:02.099) 0:10:22.555 ****** 2025-09-08 00:55:15.600064 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-08 00:55:15.600068 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-08 00:55:15.600072 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:55:15.600076 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2025-09-08 00:55:15.600080 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-08 00:55:15.600084 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:55:15.600088 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-08 00:55:15.600092 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-08 00:55:15.600096 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:55:15.600100 | orchestrator | 2025-09-08 00:55:15.600104 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-08 00:55:15.600108 | orchestrator | Monday 08 September 2025 00:54:14 +0000 (0:00:01.303) 0:10:23.859 ****** 2025-09-08 00:55:15.600112 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.600116 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.600120 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.600124 | orchestrator | 2025-09-08 00:55:15.600128 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-08 00:55:15.600135 | orchestrator | Monday 08 September 2025 00:54:15 +0000 (0:00:00.327) 0:10:24.186 ****** 2025-09-08 00:55:15.600140 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:55:15.600144 | orchestrator | 2025-09-08 00:55:15.600148 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-08 00:55:15.600152 | orchestrator | Monday 08 September 2025 00:54:15 +0000 (0:00:00.781) 0:10:24.968 ****** 2025-09-08 00:55:15.600156 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-08 00:55:15.600160 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-08 00:55:15.600164 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-08 00:55:15.600168 | orchestrator | 2025-09-08 00:55:15.600173 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-08 00:55:15.600177 | orchestrator | Monday 08 September 2025 00:54:16 +0000 (0:00:00.816) 0:10:25.784 ****** 2025-09-08 00:55:15.600181 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:55:15.600185 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-08 00:55:15.600189 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:55:15.600193 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-08 00:55:15.600197 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:55:15.600201 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-08 00:55:15.600205 | orchestrator | 2025-09-08 00:55:15.600209 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-08 00:55:15.600213 | orchestrator | Monday 08 September 2025 00:54:21 +0000 (0:00:04.750) 0:10:30.535 ****** 2025-09-08 00:55:15.600217 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:55:15.600224 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-08 00:55:15.600228 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:55:15.600232 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-08 00:55:15.600236 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:55:15.600240 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-08 00:55:15.600244 | orchestrator | 2025-09-08 00:55:15.600249 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-08 00:55:15.600253 | orchestrator | Monday 08 September 2025 00:54:24 +0000 (0:00:02.845) 0:10:33.381 ****** 2025-09-08 00:55:15.600257 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-08 00:55:15.600261 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:55:15.600265 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-08 00:55:15.600269 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:55:15.600273 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-08 00:55:15.600277 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:55:15.600281 | orchestrator | 2025-09-08 00:55:15.600285 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-09-08 00:55:15.600289 | orchestrator | Monday 08 September 2025 00:54:25 +0000 (0:00:01.286) 0:10:34.667 ****** 2025-09-08 00:55:15.600293 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-08 00:55:15.600297 | orchestrator | 2025-09-08 00:55:15.600301 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-09-08 00:55:15.600307 | orchestrator | Monday 08 September 2025 00:54:25 +0000 (0:00:00.235) 0:10:34.902 ****** 2025-09-08 00:55:15.600312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2025-09-08 00:55:15.600316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-08 00:55:15.600320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-08 00:55:15.600324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-08 00:55:15.600328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-08 00:55:15.600332 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.600336 | orchestrator | 2025-09-08 00:55:15.600340 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-08 00:55:15.600344 | orchestrator | Monday 08 September 2025 00:54:26 +0000 (0:00:00.586) 0:10:35.488 ****** 2025-09-08 00:55:15.600351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-08 00:55:15.600355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-08 00:55:15.600359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-08 00:55:15.600363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-08 00:55:15.600367 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-08 00:55:15.600371 | orchestrator | skipping: [testbed-node-3] 2025-09-08 
00:55:15.600375 | orchestrator | 2025-09-08 00:55:15.600380 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-08 00:55:15.600387 | orchestrator | Monday 08 September 2025 00:54:27 +0000 (0:00:00.606) 0:10:36.095 ****** 2025-09-08 00:55:15.600391 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-08 00:55:15.600395 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-08 00:55:15.600399 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-08 00:55:15.600403 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-08 00:55:15.600407 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-08 00:55:15.600411 | orchestrator | 2025-09-08 00:55:15.600415 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-08 00:55:15.600420 | orchestrator | Monday 08 September 2025 00:54:59 +0000 (0:00:32.226) 0:11:08.321 ****** 2025-09-08 00:55:15.600424 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.600428 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.600432 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.600436 | orchestrator | 2025-09-08 00:55:15.600440 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-08 00:55:15.600444 | orchestrator | 
Monday 08 September 2025 00:54:59 +0000 (0:00:00.340) 0:11:08.662 ****** 2025-09-08 00:55:15.600448 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.600452 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.600466 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.600470 | orchestrator | 2025-09-08 00:55:15.600474 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-08 00:55:15.600478 | orchestrator | Monday 08 September 2025 00:55:00 +0000 (0:00:00.773) 0:11:09.435 ****** 2025-09-08 00:55:15.600483 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:55:15.600487 | orchestrator | 2025-09-08 00:55:15.600491 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-08 00:55:15.600495 | orchestrator | Monday 08 September 2025 00:55:00 +0000 (0:00:00.577) 0:11:10.012 ****** 2025-09-08 00:55:15.600499 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:55:15.600503 | orchestrator | 2025-09-08 00:55:15.600507 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-08 00:55:15.600511 | orchestrator | Monday 08 September 2025 00:55:01 +0000 (0:00:00.778) 0:11:10.791 ****** 2025-09-08 00:55:15.600515 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:55:15.600519 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:55:15.600523 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:55:15.600528 | orchestrator | 2025-09-08 00:55:15.600534 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-08 00:55:15.600538 | orchestrator | Monday 08 September 2025 00:55:03 +0000 (0:00:01.324) 0:11:12.116 ****** 2025-09-08 00:55:15.600542 | orchestrator | changed: 
[testbed-node-3] 2025-09-08 00:55:15.600546 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:55:15.600550 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:55:15.600554 | orchestrator | 2025-09-08 00:55:15.600558 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-08 00:55:15.600562 | orchestrator | Monday 08 September 2025 00:55:04 +0000 (0:00:01.231) 0:11:13.348 ****** 2025-09-08 00:55:15.600566 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:55:15.600570 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:55:15.600574 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:55:15.600584 | orchestrator | 2025-09-08 00:55:15.600588 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-09-08 00:55:15.600592 | orchestrator | Monday 08 September 2025 00:55:06 +0000 (0:00:02.638) 0:11:15.986 ****** 2025-09-08 00:55:15.600596 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-08 00:55:15.600600 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-08 00:55:15.600607 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-08 00:55:15.600611 | orchestrator | 2025-09-08 00:55:15.600615 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-08 00:55:15.600619 | orchestrator | Monday 08 September 2025 00:55:09 +0000 (0:00:02.630) 0:11:18.617 ****** 2025-09-08 00:55:15.600623 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.600627 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.600631 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.600635 | orchestrator 
| 2025-09-08 00:55:15.600640 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-08 00:55:15.600644 | orchestrator | Monday 08 September 2025 00:55:09 +0000 (0:00:00.324) 0:11:18.942 ****** 2025-09-08 00:55:15.600648 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:55:15.600652 | orchestrator | 2025-09-08 00:55:15.600656 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-08 00:55:15.600660 | orchestrator | Monday 08 September 2025 00:55:10 +0000 (0:00:00.818) 0:11:19.761 ****** 2025-09-08 00:55:15.600664 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.600668 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.600672 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.600676 | orchestrator | 2025-09-08 00:55:15.600681 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-08 00:55:15.600685 | orchestrator | Monday 08 September 2025 00:55:10 +0000 (0:00:00.318) 0:11:20.079 ****** 2025-09-08 00:55:15.600689 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:55:15.600693 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:55:15.600697 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:55:15.600701 | orchestrator | 2025-09-08 00:55:15.600705 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-08 00:55:15.600709 | orchestrator | Monday 08 September 2025 00:55:11 +0000 (0:00:00.345) 0:11:20.425 ****** 2025-09-08 00:55:15.600713 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-08 00:55:15.600717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-08 00:55:15.600721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-08 00:55:15.600725 | orchestrator 
| skipping: [testbed-node-3] 2025-09-08 00:55:15.600729 | orchestrator | 2025-09-08 00:55:15.600733 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-08 00:55:15.600737 | orchestrator | Monday 08 September 2025 00:55:12 +0000 (0:00:01.111) 0:11:21.536 ****** 2025-09-08 00:55:15.600742 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:55:15.600746 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:55:15.600750 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:55:15.600754 | orchestrator | 2025-09-08 00:55:15.600758 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:55:15.600762 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-09-08 00:55:15.600766 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-09-08 00:55:15.600774 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-09-08 00:55:15.600778 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-09-08 00:55:15.600782 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-09-08 00:55:15.600786 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-09-08 00:55:15.600790 | orchestrator | 2025-09-08 00:55:15.600794 | orchestrator | 2025-09-08 00:55:15.600798 | orchestrator | 2025-09-08 00:55:15.600803 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:55:15.600807 | orchestrator | Monday 08 September 2025 00:55:12 +0000 (0:00:00.268) 0:11:21.805 ****** 2025-09-08 00:55:15.600813 | orchestrator | =============================================================================== 
2025-09-08 00:55:15.600817 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 64.65s 2025-09-08 00:55:15.600821 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.49s 2025-09-08 00:55:15.600825 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.23s 2025-09-08 00:55:15.600829 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.65s 2025-09-08 00:55:15.600833 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.88s 2025-09-08 00:55:15.600837 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.00s 2025-09-08 00:55:15.600841 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.10s 2025-09-08 00:55:15.600845 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.90s 2025-09-08 00:55:15.600849 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.12s 2025-09-08 00:55:15.600854 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.79s 2025-09-08 00:55:15.600858 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.33s 2025-09-08 00:55:15.600862 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 5.97s 2025-09-08 00:55:15.600868 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.75s 2025-09-08 00:55:15.600872 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.66s 2025-09-08 00:55:15.600876 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.13s 2025-09-08 00:55:15.600880 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.85s 2025-09-08 
00:55:15.600884 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.83s 2025-09-08 00:55:15.600889 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.48s 2025-09-08 00:55:15.600893 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.48s 2025-09-08 00:55:15.600897 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.37s 2025-09-08 00:55:15.600901 | orchestrator | 2025-09-08 00:55:15 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED 2025-09-08 00:55:15.600905 | orchestrator | 2025-09-08 00:55:15 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:55:15.600909 | orchestrator | 2025-09-08 00:55:15 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:18.631327 | orchestrator | 2025-09-08 00:55:18 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:55:18.633930 | orchestrator | 2025-09-08 00:55:18 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED 2025-09-08 00:55:18.638284 | orchestrator | 2025-09-08 00:55:18 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:55:18.638641 | orchestrator | 2025-09-08 00:55:18 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:21.687947 | orchestrator | 2025-09-08 00:55:21 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:55:21.689703 | orchestrator | 2025-09-08 00:55:21 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED 2025-09-08 00:55:21.691615 | orchestrator | 2025-09-08 00:55:21 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED 2025-09-08 00:55:21.691647 | orchestrator | 2025-09-08 00:55:21 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:24.746870 | orchestrator | 2025-09-08 00:55:24 | INFO  | Task 
ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED
2025-09-08 00:55:24.747310 | orchestrator | 2025-09-08 00:55:24 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED
2025-09-08 00:55:24.748784 | orchestrator | 2025-09-08 00:55:24 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED
2025-09-08 00:55:24.748887 | orchestrator | 2025-09-08 00:55:24 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:55:27.793187 | orchestrator | 2025-09-08 00:55:27 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED
2025-09-08 00:55:27.796963 | orchestrator | 2025-09-08 00:55:27 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED
2025-09-08 00:55:27.799020 | orchestrator | 2025-09-08 00:55:27 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED
2025-09-08 00:55:27.799046 | orchestrator | 2025-09-08 00:55:27 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:55:30.843679 | orchestrator | 2025-09-08 00:55:30 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED
2025-09-08 00:55:30.845862 | orchestrator | 2025-09-08 00:55:30 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED
2025-09-08 00:55:30.848262 | orchestrator | 2025-09-08 00:55:30 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED
2025-09-08 00:55:30.848799 | orchestrator | 2025-09-08 00:55:30 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:55:33.897445 | orchestrator | 2025-09-08 00:55:33 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED
2025-09-08 00:55:33.897724 | orchestrator | 2025-09-08 00:55:33 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED
2025-09-08 00:55:33.899765 | orchestrator | 2025-09-08 00:55:33 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED
2025-09-08 00:55:33.899792 | orchestrator | 2025-09-08 00:55:33 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:55:36.942292 | orchestrator | 2025-09-08 00:55:36 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED
2025-09-08 00:55:36.943319 | orchestrator | 2025-09-08 00:55:36 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED
2025-09-08 00:55:36.945068 | orchestrator | 2025-09-08 00:55:36 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED
2025-09-08 00:55:36.945115 | orchestrator | 2025-09-08 00:55:36 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:55:39.990612 | orchestrator | 2025-09-08 00:55:39 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED
2025-09-08 00:55:39.992597 | orchestrator | 2025-09-08 00:55:39 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED
2025-09-08 00:55:39.994586 | orchestrator | 2025-09-08 00:55:39 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED
2025-09-08 00:55:39.994723 | orchestrator | 2025-09-08 00:55:39 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:55:43.051659 | orchestrator | 2025-09-08 00:55:43 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED
2025-09-08 00:55:43.052773 | orchestrator | 2025-09-08 00:55:43 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED
2025-09-08 00:55:43.054893 | orchestrator | 2025-09-08 00:55:43 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED
2025-09-08 00:55:43.055274 | orchestrator | 2025-09-08 00:55:43 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:55:46.104695 | orchestrator | 2025-09-08 00:55:46 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED
2025-09-08 00:55:46.104935 | orchestrator | 2025-09-08 00:55:46 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED
2025-09-08 00:55:46.105274 | orchestrator | 2025-09-08 00:55:46 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED
2025-09-08 00:55:46.105299 | orchestrator | 2025-09-08 00:55:46 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:55:49.168169 | orchestrator | 2025-09-08 00:55:49 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED
2025-09-08 00:55:49.169559 | orchestrator | 2025-09-08 00:55:49 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED
2025-09-08 00:55:49.170970 | orchestrator | 2025-09-08 00:55:49 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED
2025-09-08 00:55:49.171095 | orchestrator | 2025-09-08 00:55:49 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:55:52.213052 | orchestrator | 2025-09-08 00:55:52 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED
2025-09-08 00:55:52.213937 | orchestrator | 2025-09-08 00:55:52 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED
2025-09-08 00:55:52.216908 | orchestrator | 2025-09-08 00:55:52 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED
2025-09-08 00:55:52.217671 | orchestrator | 2025-09-08 00:55:52 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:55:55.260211 | orchestrator | 2025-09-08 00:55:55 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED
2025-09-08 00:55:55.261150 | orchestrator | 2025-09-08 00:55:55 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED
2025-09-08 00:55:55.264062 | orchestrator | 2025-09-08 00:55:55 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED
2025-09-08 00:55:55.264088 | orchestrator | 2025-09-08 00:55:55 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:55:58.303652 | orchestrator | 2025-09-08 00:55:58 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED
2025-09-08 00:55:58.304543 | orchestrator | 2025-09-08 00:55:58 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED
2025-09-08 00:55:58.306889 | orchestrator | 2025-09-08 00:55:58 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED
2025-09-08 00:55:58.307120 | orchestrator | 2025-09-08 00:55:58 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:56:01.370898 | orchestrator | 2025-09-08 00:56:01 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED
2025-09-08 00:56:01.371977 | orchestrator | 2025-09-08 00:56:01 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED
2025-09-08 00:56:01.373884 | orchestrator | 2025-09-08 00:56:01 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state STARTED
2025-09-08 00:56:01.373976 | orchestrator | 2025-09-08 00:56:01 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:56:04.414600 | orchestrator | 2025-09-08 00:56:04 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED
2025-09-08 00:56:04.416270 | orchestrator | 2025-09-08 00:56:04 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED
2025-09-08 00:56:04.419248 | orchestrator | 2025-09-08 00:56:04 | INFO  | Task 26f04767-f108-40b7-81ca-a9aa79760e29 is in state SUCCESS
2025-09-08 00:56:04.421528 | orchestrator |
2025-09-08 00:56:04.421547 | orchestrator |
2025-09-08 00:56:04.421563 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 00:56:04.421567 | orchestrator |
2025-09-08 00:56:04.421572 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 00:56:04.421576 | orchestrator | Monday 08 September 2025 00:53:03 +0000 (0:00:00.315) 0:00:00.315 ******
2025-09-08 00:56:04.421580 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:56:04.421585 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:56:04.421589 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:56:04.421593 | orchestrator |
2025-09-08 00:56:04.421597 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 00:56:04.421601
| orchestrator | Monday 08 September 2025 00:53:04 +0000 (0:00:00.290) 0:00:00.606 ****** 2025-09-08 00:56:04.421606 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-09-08 00:56:04.421610 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-09-08 00:56:04.421614 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-09-08 00:56:04.421618 | orchestrator | 2025-09-08 00:56:04.421622 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-09-08 00:56:04.421625 | orchestrator | 2025-09-08 00:56:04.421629 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-08 00:56:04.421633 | orchestrator | Monday 08 September 2025 00:53:04 +0000 (0:00:00.416) 0:00:01.022 ****** 2025-09-08 00:56:04.421637 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:56:04.421641 | orchestrator | 2025-09-08 00:56:04.421645 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-09-08 00:56:04.421649 | orchestrator | Monday 08 September 2025 00:53:05 +0000 (0:00:00.504) 0:00:01.526 ****** 2025-09-08 00:56:04.421653 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-08 00:56:04.421656 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-08 00:56:04.421660 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-08 00:56:04.421664 | orchestrator | 2025-09-08 00:56:04.421668 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-09-08 00:56:04.421672 | orchestrator | Monday 08 September 2025 00:53:05 +0000 (0:00:00.675) 0:00:02.202 ****** 2025-09-08 00:56:04.421678 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:56:04.421685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:56:04.421713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:56:04.421720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:56:04.421725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:56:04.421729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:56:04.421737 | orchestrator | 2025-09-08 00:56:04.421741 | orchestrator | TASK [opensearch : 
include_tasks] ********************************************** 2025-09-08 00:56:04.421745 | orchestrator | Monday 08 September 2025 00:53:07 +0000 (0:00:01.486) 0:00:03.689 ****** 2025-09-08 00:56:04.421749 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:56:04.421753 | orchestrator | 2025-09-08 00:56:04.421757 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-08 00:56:04.421760 | orchestrator | Monday 08 September 2025 00:53:07 +0000 (0:00:00.518) 0:00:04.207 ****** 2025-09-08 00:56:04.421770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:56:04.421775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:56:04.421779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:56:04.421783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:56:04.421796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:56:04.421801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:56:04.421805 | orchestrator | 2025-09-08 00:56:04.421809 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-08 00:56:04.421813 | orchestrator | Monday 08 September 2025 00:53:10 +0000 (0:00:02.467) 0:00:06.675 ****** 2025-09-08 00:56:04.421817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-08 00:56:04.421824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-08 00:56:04.421828 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:56:04.421832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-08 00:56:04.421842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-08 00:56:04.421846 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:04.421850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-08 00:56:04.421858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-08 00:56:04.421862 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:04.421866 | orchestrator | 2025-09-08 00:56:04.421869 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-08 00:56:04.421873 | orchestrator | Monday 08 September 2025 00:53:11 +0000 (0:00:01.442) 0:00:08.117 ****** 2025-09-08 00:56:04.421877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-08 00:56:04.421887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-08 00:56:04.421891 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:56:04.421895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': 
'30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-08 00:56:04.421904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-08 00:56:04.421908 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:04.421912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-08 00:56:04.421923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-08 00:56:04.421928 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:04.421931 | orchestrator | 2025-09-08 00:56:04.421935 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-08 00:56:04.421939 | orchestrator | Monday 08 September 2025 00:53:12 +0000 (0:00:01.191) 0:00:09.309 ****** 2025-09-08 00:56:04.421943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:56:04.421953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:56:04.421958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:56:04.421968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:56:04.421972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:56:04.421980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:56:04.421984 | orchestrator | 2025-09-08 00:56:04.421988 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-08 00:56:04.421991 | orchestrator | Monday 08 September 2025 00:53:15 +0000 (0:00:02.749) 0:00:12.058 ****** 2025-09-08 00:56:04.421995 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:56:04.421999 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:04.422003 | 
orchestrator | changed: [testbed-node-1] 2025-09-08 00:56:04.422006 | orchestrator | 2025-09-08 00:56:04.422010 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-08 00:56:04.422041 | orchestrator | Monday 08 September 2025 00:53:19 +0000 (0:00:03.463) 0:00:15.522 ****** 2025-09-08 00:56:04.422046 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:04.422050 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:56:04.422053 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:56:04.422057 | orchestrator | 2025-09-08 00:56:04.422061 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-08 00:56:04.422065 | orchestrator | Monday 08 September 2025 00:53:21 +0000 (0:00:01.955) 0:00:17.478 ****** 2025-09-08 00:56:04.422069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:56:04.422226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:56:04.422241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:56:04.422246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:56:04.422252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:56:04.422263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:56:04.422272 | orchestrator | 2025-09-08 00:56:04.422277 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-08 00:56:04.422282 | orchestrator | Monday 08 September 2025 00:53:23 +0000 (0:00:02.326) 0:00:19.804 ****** 2025-09-08 00:56:04.422287 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:56:04.422291 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:04.422296 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:04.422300 | orchestrator | 2025-09-08 00:56:04.422305 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-08 00:56:04.422309 | orchestrator | Monday 08 September 2025 00:53:23 +0000 (0:00:00.301) 0:00:20.105 ****** 2025-09-08 00:56:04.422314 | orchestrator | 2025-09-08 00:56:04.422318 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-08 00:56:04.422323 | orchestrator | Monday 08 September 2025 00:53:23 +0000 (0:00:00.060) 0:00:20.166 ****** 2025-09-08 00:56:04.422328 | orchestrator | 2025-09-08 00:56:04.422332 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-08 00:56:04.422337 | 
orchestrator | Monday 08 September 2025 00:53:23 +0000 (0:00:00.066) 0:00:20.233 ****** 2025-09-08 00:56:04.422342 | orchestrator | 2025-09-08 00:56:04.422346 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-09-08 00:56:04.422351 | orchestrator | Monday 08 September 2025 00:53:23 +0000 (0:00:00.066) 0:00:20.299 ****** 2025-09-08 00:56:04.422355 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:56:04.422360 | orchestrator | 2025-09-08 00:56:04.422365 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-09-08 00:56:04.422369 | orchestrator | Monday 08 September 2025 00:53:24 +0000 (0:00:00.203) 0:00:20.503 ****** 2025-09-08 00:56:04.422374 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:56:04.422378 | orchestrator | 2025-09-08 00:56:04.422383 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-09-08 00:56:04.422387 | orchestrator | Monday 08 September 2025 00:53:24 +0000 (0:00:00.651) 0:00:21.154 ****** 2025-09-08 00:56:04.422392 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:04.422397 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:56:04.422402 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:56:04.422406 | orchestrator | 2025-09-08 00:56:04.422411 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-09-08 00:56:04.422416 | orchestrator | Monday 08 September 2025 00:54:32 +0000 (0:01:08.079) 0:01:29.233 ****** 2025-09-08 00:56:04.422421 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:04.422425 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:56:04.422430 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:56:04.422434 | orchestrator | 2025-09-08 00:56:04.422439 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-08 00:56:04.422458 | 
orchestrator | Monday 08 September 2025 00:55:51 +0000 (0:01:18.938) 0:02:48.172 ****** 2025-09-08 00:56:04.422462 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:56:04.422467 | orchestrator | 2025-09-08 00:56:04.422471 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-09-08 00:56:04.422475 | orchestrator | Monday 08 September 2025 00:55:52 +0000 (0:00:00.504) 0:02:48.676 ****** 2025-09-08 00:56:04.422480 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:56:04.422484 | orchestrator | 2025-09-08 00:56:04.422489 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-09-08 00:56:04.422493 | orchestrator | Monday 08 September 2025 00:55:54 +0000 (0:00:02.669) 0:02:51.346 ****** 2025-09-08 00:56:04.422498 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:56:04.422502 | orchestrator | 2025-09-08 00:56:04.422507 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-09-08 00:56:04.422512 | orchestrator | Monday 08 September 2025 00:55:57 +0000 (0:00:02.173) 0:02:53.519 ****** 2025-09-08 00:56:04.422516 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:04.422523 | orchestrator | 2025-09-08 00:56:04.422527 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-09-08 00:56:04.422531 | orchestrator | Monday 08 September 2025 00:55:59 +0000 (0:00:02.668) 0:02:56.188 ****** 2025-09-08 00:56:04.422535 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:04.422539 | orchestrator | 2025-09-08 00:56:04.422542 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:56:04.422547 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-08 00:56:04.422553 | orchestrator 
| testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-08 00:56:04.422557 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-08 00:56:04.422560 | orchestrator | 2025-09-08 00:56:04.422564 | orchestrator | 2025-09-08 00:56:04.422568 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:56:04.422576 | orchestrator | Monday 08 September 2025 00:56:02 +0000 (0:00:02.562) 0:02:58.750 ****** 2025-09-08 00:56:04.422580 | orchestrator | =============================================================================== 2025-09-08 00:56:04.422584 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 78.94s 2025-09-08 00:56:04.422588 | orchestrator | opensearch : Restart opensearch container ------------------------------ 68.08s 2025-09-08 00:56:04.422592 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.46s 2025-09-08 00:56:04.422595 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.75s 2025-09-08 00:56:04.422599 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.67s 2025-09-08 00:56:04.422603 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.67s 2025-09-08 00:56:04.422607 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.56s 2025-09-08 00:56:04.422610 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.47s 2025-09-08 00:56:04.422614 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.33s 2025-09-08 00:56:04.422618 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.17s 2025-09-08 00:56:04.422622 | orchestrator | opensearch : Copying over 
opensearch-dashboards config file ------------- 1.96s 2025-09-08 00:56:04.422625 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.49s 2025-09-08 00:56:04.422629 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.44s 2025-09-08 00:56:04.422633 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.19s 2025-09-08 00:56:04.422637 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.68s 2025-09-08 00:56:04.422640 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.65s 2025-09-08 00:56:04.422644 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2025-09-08 00:56:04.422648 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2025-09-08 00:56:04.422652 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2025-09-08 00:56:04.422655 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2025-09-08 00:56:04.422659 | orchestrator | 2025-09-08 00:56:04 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:07.472183 | orchestrator | 2025-09-08 00:56:07 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:56:07.474383 | orchestrator | 2025-09-08 00:56:07 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED 2025-09-08 00:56:07.474423 | orchestrator | 2025-09-08 00:56:07 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:10.521535 | orchestrator | 2025-09-08 00:56:10 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:56:10.523079 | orchestrator | 2025-09-08 00:56:10 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED 2025-09-08 00:56:10.523377 | orchestrator | 
2025-09-08 00:56:10 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:13.567700 | orchestrator | 2025-09-08 00:56:13 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:56:13.568259 | orchestrator | 2025-09-08 00:56:13 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED 2025-09-08 00:56:13.568546 | orchestrator | 2025-09-08 00:56:13 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:16.611985 | orchestrator | 2025-09-08 00:56:16 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state STARTED 2025-09-08 00:56:16.612564 | orchestrator | 2025-09-08 00:56:16 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED 2025-09-08 00:56:16.612596 | orchestrator | 2025-09-08 00:56:16 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:19.657936 | orchestrator | 2025-09-08 00:56:19 | INFO  | Task ff3134a8-68be-44bf-b39e-32c4dc33efb7 is in state SUCCESS 2025-09-08 00:56:19.659533 | orchestrator | 2025-09-08 00:56:19.659576 | orchestrator | 2025-09-08 00:56:19.659588 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-09-08 00:56:19.659600 | orchestrator | 2025-09-08 00:56:19.659611 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-08 00:56:19.659623 | orchestrator | Monday 08 September 2025 00:53:03 +0000 (0:00:00.107) 0:00:00.107 ****** 2025-09-08 00:56:19.659634 | orchestrator | ok: [localhost] => { 2025-09-08 00:56:19.659647 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2025-09-08 00:56:19.659658 | orchestrator | } 2025-09-08 00:56:19.659670 | orchestrator | 2025-09-08 00:56:19.659681 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-09-08 00:56:19.659692 | orchestrator | Monday 08 September 2025 00:53:03 +0000 (0:00:00.061) 0:00:00.169 ****** 2025-09-08 00:56:19.659704 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-09-08 00:56:19.659717 | orchestrator | ...ignoring 2025-09-08 00:56:19.659729 | orchestrator | 2025-09-08 00:56:19.659757 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-09-08 00:56:19.659769 | orchestrator | Monday 08 September 2025 00:53:06 +0000 (0:00:02.919) 0:00:03.088 ****** 2025-09-08 00:56:19.659803 | orchestrator | skipping: [localhost] 2025-09-08 00:56:19.659815 | orchestrator | 2025-09-08 00:56:19.659826 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-09-08 00:56:19.659837 | orchestrator | Monday 08 September 2025 00:53:06 +0000 (0:00:00.067) 0:00:03.155 ****** 2025-09-08 00:56:19.659848 | orchestrator | ok: [localhost] 2025-09-08 00:56:19.659860 | orchestrator | 2025-09-08 00:56:19.659870 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 00:56:19.659881 | orchestrator | 2025-09-08 00:56:19.659892 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 00:56:19.659903 | orchestrator | Monday 08 September 2025 00:53:06 +0000 (0:00:00.194) 0:00:03.350 ****** 2025-09-08 00:56:19.659914 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:56:19.659925 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:56:19.659936 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:56:19.659946 | orchestrator | 2025-09-08 00:56:19.659957 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 00:56:19.659968 | orchestrator | Monday 08 September 2025 00:53:07 +0000 (0:00:00.319) 0:00:03.670 ****** 2025-09-08 00:56:19.660380 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-08 00:56:19.660414 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-08 00:56:19.660433 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-08 00:56:19.660482 | orchestrator | 2025-09-08 00:56:19.660500 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-08 00:56:19.660515 | orchestrator | 2025-09-08 00:56:19.660526 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-08 00:56:19.660536 | orchestrator | Monday 08 September 2025 00:53:07 +0000 (0:00:00.600) 0:00:04.270 ****** 2025-09-08 00:56:19.660547 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-08 00:56:19.660558 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-08 00:56:19.660569 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-08 00:56:19.660579 | orchestrator | 2025-09-08 00:56:19.660590 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-08 00:56:19.660600 | orchestrator | Monday 08 September 2025 00:53:08 +0000 (0:00:00.355) 0:00:04.625 ****** 2025-09-08 00:56:19.660611 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:56:19.660622 | orchestrator | 2025-09-08 00:56:19.660633 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-09-08 00:56:19.660643 | orchestrator | Monday 08 September 2025 00:53:08 +0000 (0:00:00.511) 0:00:05.136 ****** 2025-09-08 00:56:19.660676 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-08 00:56:19.660703 | orchestrator | changed: [testbed-node-1] => (item=mariadb; service definition identical to the first item above, with 'MYSQL_HOST': '192.168.16.11')
2025-09-08 00:56:19.660731 | orchestrator | changed: [testbed-node-0] => (item=mariadb; service definition identical to the first item above, with 'MYSQL_HOST': '192.168.16.10')
2025-09-08 00:56:19.660744 | orchestrator |
2025-09-08 00:56:19.660764 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2025-09-08 00:56:19.660776 | orchestrator | Monday 08 September 2025 00:53:12 +0000 (0:00:03.425) 0:00:08.562 ******
2025-09-08 00:56:19.660786 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:56:19.660798 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:56:19.660809 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:56:19.660819 | orchestrator |
2025-09-08 00:56:19.660830 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2025-09-08 00:56:19.660841 | orchestrator | Monday 08 September 2025 00:53:12 +0000 (0:00:00.729) 0:00:09.291 ******
2025-09-08 00:56:19.660851 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:56:19.660862 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:56:19.660873 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:56:19.660884 | orchestrator |
2025-09-08 00:56:19.660894 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2025-09-08 00:56:19.660905 | orchestrator | Monday 08 September 2025 00:53:14 +0000 (0:00:01.663) 0:00:10.955 ******
2025-09-08 00:56:19.660931 | orchestrator | changed: [testbed-node-0] => (item=mariadb; service definition identical to the first item above, with 'MYSQL_HOST': '192.168.16.10')
2025-09-08 00:56:19.660951 | orchestrator | changed: [testbed-node-2] => (item=mariadb; service definition identical to the first item above, with 'MYSQL_HOST': '192.168.16.12')
2025-09-08 00:56:19.660969 | orchestrator | changed: [testbed-node-1] => (item=mariadb; service definition identical to the first item above, with 'MYSQL_HOST': '192.168.16.11')
2025-09-08 00:56:19.660989 | orchestrator |
2025-09-08 00:56:19.661000 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2025-09-08 00:56:19.661010 | orchestrator | Monday 08 September 2025 00:53:18 +0000 (0:00:04.246) 0:00:15.201 ******
2025-09-08 00:56:19.661021 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:56:19.661032 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:56:19.661042 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:56:19.661053 | orchestrator |
2025-09-08 00:56:19.661064 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2025-09-08 00:56:19.661074 | orchestrator | Monday 08 September 2025 00:53:19 +0000 (0:00:01.230) 0:00:16.432 ******
2025-09-08 00:56:19.661085 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:56:19.661095 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:56:19.661106 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:56:19.661117 | orchestrator |
2025-09-08 00:56:19.661127 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-09-08 00:56:19.661138 | orchestrator | Monday 08 September 2025 00:53:24 +0000 (0:00:04.658) 0:00:21.090 ******
2025-09-08 00:56:19.661149 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:56:19.661160 | orchestrator |
2025-09-08 00:56:19.661170 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-09-08 00:56:19.661181 | orchestrator | Monday 08 September 2025 00:53:25 +0000 (0:00:00.582) 0:00:21.673 ******
2025-09-08 00:56:19.661201 | orchestrator | skipping: [testbed-node-0] => (item=mariadb; service definition identical to the first item above, with 'MYSQL_HOST': '192.168.16.10')
2025-09-08 00:56:19.661219 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:56:19.661236 | orchestrator | skipping: [testbed-node-1] => (item=mariadb; service definition identical to the first item above, with 'MYSQL_HOST': '192.168.16.11')
2025-09-08 00:56:19.661248 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:56:19.661268 | orchestrator | skipping: [testbed-node-2] => (item=mariadb; service definition identical to the first item above, with 'MYSQL_HOST': '192.168.16.12')
2025-09-08 00:56:19.661287 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:56:19.661298 | orchestrator |
2025-09-08 00:56:19.661308 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-09-08 00:56:19.661319 | orchestrator | Monday 08 September 2025 00:53:28 +0000 (0:00:03.189) 0:00:24.863 ******
2025-09-08 00:56:19.661337 | orchestrator | skipping: [testbed-node-0] => (item=mariadb; service definition identical to the first item above, with 'MYSQL_HOST': '192.168.16.10')
2025-09-08 00:56:19.661349 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:56:19.661365 | orchestrator | skipping: [testbed-node-1] => (item=mariadb; service definition identical to the first item above, with 'MYSQL_HOST': '192.168.16.11')
2025-09-08 00:56:19.661384 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:56:19.661400 | orchestrator | skipping: [testbed-node-2] => (item=mariadb; service definition identical to the first item above, with 'MYSQL_HOST': '192.168.16.12')
2025-09-08 00:56:19.661412 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:56:19.661423 | orchestrator |
2025-09-08 00:56:19.661434 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-09-08 00:56:19.661463 | orchestrator | Monday 08 September 2025 00:53:30 +0000 (0:00:02.387) 0:00:27.250 ******
2025-09-08 00:56:19.661475 | orchestrator | skipping: [testbed-node-0] => (item=mariadb; service definition identical to the first item above, with 'MYSQL_HOST': '192.168.16.10')
2025-09-08 00:56:19.661512 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:56:19.661537 | orchestrator | skipping: [testbed-node-1] => (item=mariadb; service definition identical to the first item above, with 'MYSQL_HOST': '192.168.16.11')
2025-09-08 00:56:19.661550 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:56:19.661562 | orchestrator | skipping: [testbed-node-2] => (item=mariadb; service definition identical to the first item above, with 'MYSQL_HOST': '192.168.16.12')
2025-09-08 00:56:19.661580 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:56:19.661591 | orchestrator |
2025-09-08 00:56:19.661601 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2025-09-08 00:56:19.661612 | orchestrator | Monday 08 September 2025 00:53:33 +0000 (0:00:03.113) 0:00:30.364 ******
2025-09-08 00:56:19.661637 | orchestrator | changed: [testbed-node-0] => (item=mariadb; service definition identical to the first item above, with 'MYSQL_HOST': '192.168.16.10')
2025-09-08 00:56:19.661650 | orchestrator | changed: [testbed-node-2] => (item=mariadb; service definition identical to the first item above, with 'MYSQL_HOST': '192.168.16.12')
2025-09-08 00:56:19.661683 | orchestrator | changed: [testbed-node-1] => (item=mariadb; service definition identical to the first item above, with 'MYSQL_HOST': '192.168.16.11')
2025-09-08 00:56:19.661696 | orchestrator |
2025-09-08 00:56:19.661707 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2025-09-08 00:56:19.661717 | orchestrator | Monday 08 September 2025 00:53:37 +0000 (0:00:03.195) 0:00:33.560 ******
2025-09-08 00:56:19.661728 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:56:19.661739 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:56:19.661750 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:56:19.661760 | orchestrator |
2025-09-08 00:56:19.661771 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2025-09-08 00:56:19.661782 | orchestrator | Monday 08 September 2025 00:53:37 +0000 (0:00:00.870) 0:00:34.430 ******
2025-09-08 00:56:19.661890 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:56:19.661904 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:56:19.661914 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:56:19.661925 | orchestrator |
2025-09-08 00:56:19.661936 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-09-08 00:56:19.661946 | orchestrator | Monday 08 September 2025 00:53:38 +0000 (0:00:00.517) 0:00:34.948 ******
2025-09-08 00:56:19.661957 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:56:19.661968 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:56:19.661978 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:56:19.661989 | orchestrator |
2025-09-08 00:56:19.662000 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-09-08 00:56:19.662011 | orchestrator | Monday 08 September 2025 00:53:38 +0000 (0:00:00.344) 0:00:35.293 ******
2025-09-08 00:56:19.662080 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-09-08 00:56:19.662100 | orchestrator | ...ignoring
2025-09-08 00:56:19.662120 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-09-08 00:56:19.662132 | orchestrator | ...ignoring
2025-09-08 00:56:19.662143 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-09-08 00:56:19.662163 | orchestrator | ...ignoring
2025-09-08 00:56:19.662174 | orchestrator |
2025-09-08 00:56:19.662185 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-09-08 00:56:19.662196 | orchestrator | Monday 08 September 2025 00:53:49 +0000 (0:00:10.853) 0:00:46.146 ******
2025-09-08 00:56:19.662206 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:56:19.662216 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:56:19.662227 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:56:19.662237 | orchestrator |
2025-09-08 00:56:19.662248 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-09-08 00:56:19.662259 | orchestrator | Monday 08 September 2025 00:53:50 +0000 (0:00:00.431) 0:00:46.577 ******
2025-09-08 00:56:19.662269 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:56:19.662280 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:56:19.662290 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:56:19.662301 | orchestrator |
2025-09-08 00:56:19.662312 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-09-08 00:56:19.662322 | orchestrator | Monday 08 September 2025 00:53:50 +0000 (0:00:00.638) 0:00:47.216 ******
2025-09-08 00:56:19.662333 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:56:19.662343 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:56:19.662354 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:56:19.662364 | orchestrator |
2025-09-08 00:56:19.662374 | orchestrator | TASK
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-09-08 00:56:19.662385 | orchestrator | Monday 08 September 2025 00:53:51 +0000 (0:00:00.436) 0:00:47.653 ****** 2025-09-08 00:56:19.662396 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:56:19.662406 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:19.662417 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:19.662427 | orchestrator | 2025-09-08 00:56:19.662460 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-09-08 00:56:19.662472 | orchestrator | Monday 08 September 2025 00:53:51 +0000 (0:00:00.396) 0:00:48.049 ****** 2025-09-08 00:56:19.662482 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:56:19.662493 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:56:19.662503 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:56:19.662514 | orchestrator | 2025-09-08 00:56:19.662524 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-09-08 00:56:19.662535 | orchestrator | Monday 08 September 2025 00:53:52 +0000 (0:00:00.444) 0:00:48.494 ****** 2025-09-08 00:56:19.662555 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:56:19.662566 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:19.662577 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:19.662587 | orchestrator | 2025-09-08 00:56:19.662598 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-08 00:56:19.662609 | orchestrator | Monday 08 September 2025 00:53:52 +0000 (0:00:00.896) 0:00:49.391 ****** 2025-09-08 00:56:19.662619 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:19.662630 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:19.662641 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-08 00:56:19.662651 | orchestrator | 2025-09-08 
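All three liveness probes above time out and are ignored, which is expected on a first deployment: nothing is listening on 3306 yet and no prior cluster exists, so the include_tasks step routes only testbed-node-0 into bootstrap_cluster.yml while the other nodes wait to join. A rough Python sketch of that decision, assuming the real logic reduces to these three inputs (the function is an illustration, not kolla-ansible's actual code):

```python
def plan_cluster_start(hosts, port_alive, cluster_exists):
    """Decide which host bootstraps a Galera cluster and which hosts join,
    mirroring the 'Divide hosts ...' tasks in the log (illustrative only)."""
    if any(port_alive.values()):
        # Cluster is already running somewhere: no bootstrap, dead nodes join.
        return {"bootstrap": None, "join": [h for h in hosts if not port_alive[h]]}
    if cluster_exists:
        # Existing but stopped cluster: the play fails rather than re-bootstrap.
        raise RuntimeError("existing but stopped cluster: manual recovery needed")
    # Fresh deploy: the first host bootstraps, the rest join afterwards.
    return {"bootstrap": hosts[0], "join": hosts[1:]}

plan = plan_cluster_start(
    ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    {"testbed-node-0": False, "testbed-node-1": False, "testbed-node-2": False},
    cluster_exists=False,
)
```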
00:56:19.662662 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-08 00:56:19.662672 | orchestrator | Monday 08 September 2025 00:53:53 +0000 (0:00:00.392) 0:00:49.784 ****** 2025-09-08 00:56:19.662683 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:19.662693 | orchestrator | 2025-09-08 00:56:19.662704 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-08 00:56:19.662714 | orchestrator | Monday 08 September 2025 00:54:04 +0000 (0:00:10.705) 0:01:00.489 ****** 2025-09-08 00:56:19.662731 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:56:19.662743 | orchestrator | 2025-09-08 00:56:19.662753 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-08 00:56:19.662770 | orchestrator | Monday 08 September 2025 00:54:04 +0000 (0:00:00.143) 0:01:00.633 ****** 2025-09-08 00:56:19.662781 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:56:19.662792 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:19.662802 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:19.662813 | orchestrator | 2025-09-08 00:56:19.662824 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-08 00:56:19.662834 | orchestrator | Monday 08 September 2025 00:54:05 +0000 (0:00:00.985) 0:01:01.619 ****** 2025-09-08 00:56:19.662845 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:19.662855 | orchestrator | 2025-09-08 00:56:19.662866 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-08 00:56:19.662876 | orchestrator | Monday 08 September 2025 00:54:13 +0000 (0:00:07.851) 0:01:09.471 ****** 2025-09-08 00:56:19.662887 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:56:19.662897 | orchestrator | 2025-09-08 00:56:19.662908 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2025-09-08 00:56:19.662919 | orchestrator | Monday 08 September 2025 00:54:14 +0000 (0:00:01.536) 0:01:11.007 ****** 2025-09-08 00:56:19.662929 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:56:19.662940 | orchestrator | 2025-09-08 00:56:19.662951 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-08 00:56:19.662961 | orchestrator | Monday 08 September 2025 00:54:17 +0000 (0:00:02.576) 0:01:13.584 ****** 2025-09-08 00:56:19.662972 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:19.662982 | orchestrator | 2025-09-08 00:56:19.662993 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-08 00:56:19.663004 | orchestrator | Monday 08 September 2025 00:54:17 +0000 (0:00:00.119) 0:01:13.703 ****** 2025-09-08 00:56:19.663015 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:56:19.663025 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:19.663035 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:19.663046 | orchestrator | 2025-09-08 00:56:19.663057 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-08 00:56:19.663067 | orchestrator | Monday 08 September 2025 00:54:17 +0000 (0:00:00.317) 0:01:14.021 ****** 2025-09-08 00:56:19.663078 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:56:19.663088 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-08 00:56:19.663099 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:56:19.663110 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:56:19.663120 | orchestrator | 2025-09-08 00:56:19.663131 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-08 00:56:19.663141 | orchestrator | skipping: no hosts matched 2025-09-08 00:56:19.663152 | orchestrator | 2025-09-08 00:56:19.663163 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-08 00:56:19.663173 | orchestrator | 2025-09-08 00:56:19.663184 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-08 00:56:19.663194 | orchestrator | Monday 08 September 2025 00:54:18 +0000 (0:00:00.546) 0:01:14.567 ****** 2025-09-08 00:56:19.663205 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:56:19.663216 | orchestrator | 2025-09-08 00:56:19.663226 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-08 00:56:19.663237 | orchestrator | Monday 08 September 2025 00:54:43 +0000 (0:00:25.297) 0:01:39.864 ****** 2025-09-08 00:56:19.663247 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:56:19.663258 | orchestrator | 2025-09-08 00:56:19.663269 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-08 00:56:19.663279 | orchestrator | Monday 08 September 2025 00:55:00 +0000 (0:00:16.605) 0:01:56.470 ****** 2025-09-08 00:56:19.663290 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:56:19.663300 | orchestrator | 2025-09-08 00:56:19.663311 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-08 00:56:19.663321 | orchestrator | 2025-09-08 00:56:19.663332 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-08 00:56:19.663349 | orchestrator | Monday 08 September 2025 00:55:02 +0000 (0:00:02.461) 0:01:58.932 ****** 2025-09-08 00:56:19.663359 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:56:19.663370 | orchestrator | 2025-09-08 00:56:19.663380 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-08 00:56:19.663391 | orchestrator | Monday 08 September 2025 00:55:27 +0000 (0:00:25.433) 0:02:24.365 ****** 2025-09-08 00:56:19.663401 | 
orchestrator | ok: [testbed-node-2] 2025-09-08 00:56:19.663412 | orchestrator | 2025-09-08 00:56:19.663423 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-08 00:56:19.663433 | orchestrator | Monday 08 September 2025 00:55:43 +0000 (0:00:15.563) 0:02:39.929 ****** 2025-09-08 00:56:19.663470 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:56:19.663481 | orchestrator | 2025-09-08 00:56:19.663491 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-08 00:56:19.663502 | orchestrator | 2025-09-08 00:56:19.663519 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-08 00:56:19.663530 | orchestrator | Monday 08 September 2025 00:55:46 +0000 (0:00:02.568) 0:02:42.497 ****** 2025-09-08 00:56:19.663540 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:19.663551 | orchestrator | 2025-09-08 00:56:19.663561 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-08 00:56:19.663572 | orchestrator | Monday 08 September 2025 00:55:58 +0000 (0:00:12.076) 0:02:54.573 ****** 2025-09-08 00:56:19.663583 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:56:19.663593 | orchestrator | 2025-09-08 00:56:19.663604 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-08 00:56:19.663614 | orchestrator | Monday 08 September 2025 00:56:02 +0000 (0:00:04.565) 0:02:59.139 ****** 2025-09-08 00:56:19.663625 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:56:19.663635 | orchestrator | 2025-09-08 00:56:19.663646 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-08 00:56:19.663657 | orchestrator | 2025-09-08 00:56:19.663667 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-08 00:56:19.663683 | orchestrator | 
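Each "Wait for MariaDB service to sync WSREP" task above blocks until Galera reports the node as Synced, typically by reading the `wsrep_local_state_comment` status variable. A hedged sketch of such a check, assuming tab-separated `mysql -e "SHOW STATUS LIKE ..."` output (the parsing is illustrative; kolla-ansible performs an equivalent check):

```python
def is_wsrep_synced(status_output: str) -> bool:
    """Return True when SHOW STATUS output reports the node as Synced."""
    for line in status_output.splitlines():
        key, _, value = line.partition("\t")
        if key == "wsrep_local_state_comment":
            # Other values like 'Donor/Desynced' or 'Joining' mean not ready.
            return value.strip() == "Synced"
    return False
```

A node restarted into an existing cluster (as nodes 1 and 2 are above) passes this check only after a state transfer from the donor completes, which is why the restart tasks take 15-25 seconds each.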
Monday 08 September 2025 00:56:05 +0000 (0:00:02.718) 0:03:01.857 ****** 2025-09-08 00:56:19.663694 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:56:19.663705 | orchestrator | 2025-09-08 00:56:19.663715 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-08 00:56:19.663726 | orchestrator | Monday 08 September 2025 00:56:05 +0000 (0:00:00.568) 0:03:02.425 ****** 2025-09-08 00:56:19.663736 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:19.663747 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:19.663757 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:19.663768 | orchestrator | 2025-09-08 00:56:19.663778 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-08 00:56:19.663789 | orchestrator | Monday 08 September 2025 00:56:08 +0000 (0:00:02.111) 0:03:04.536 ****** 2025-09-08 00:56:19.663799 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:19.663810 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:19.663821 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:19.663831 | orchestrator | 2025-09-08 00:56:19.663842 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-08 00:56:19.663852 | orchestrator | Monday 08 September 2025 00:56:10 +0000 (0:00:02.190) 0:03:06.726 ****** 2025-09-08 00:56:19.663863 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:19.663873 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:19.663884 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:19.663894 | orchestrator | 2025-09-08 00:56:19.663905 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-08 00:56:19.663916 | orchestrator | Monday 08 September 2025 00:56:12 +0000 (0:00:02.080) 0:03:08.807 ****** 2025-09-08 00:56:19.663926 | 
orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:19.663943 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:19.663954 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:19.663964 | orchestrator | 2025-09-08 00:56:19.663975 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-08 00:56:19.663986 | orchestrator | Monday 08 September 2025 00:56:14 +0000 (0:00:02.123) 0:03:10.930 ****** 2025-09-08 00:56:19.663997 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:56:19.664007 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:56:19.664018 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:56:19.664028 | orchestrator | 2025-09-08 00:56:19.664039 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-08 00:56:19.664050 | orchestrator | Monday 08 September 2025 00:56:17 +0000 (0:00:02.826) 0:03:13.757 ****** 2025-09-08 00:56:19.664061 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:56:19.664071 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:19.664082 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:19.664093 | orchestrator | 2025-09-08 00:56:19.664103 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:56:19.664114 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-08 00:56:19.664125 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-09-08 00:56:19.664138 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-08 00:56:19.664149 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-08 00:56:19.664285 | orchestrator | 2025-09-08 00:56:19.664305 | orchestrator | 2025-09-08 00:56:19.664324 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-09-08 00:56:19.664342 | orchestrator | Monday 08 September 2025 00:56:17 +0000 (0:00:00.336) 0:03:14.093 ****** 2025-09-08 00:56:19.664357 | orchestrator | =============================================================================== 2025-09-08 00:56:19.664367 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 50.73s 2025-09-08 00:56:19.664378 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 32.17s 2025-09-08 00:56:19.664389 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.08s 2025-09-08 00:56:19.664399 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.85s 2025-09-08 00:56:19.664410 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.71s 2025-09-08 00:56:19.664420 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.85s 2025-09-08 00:56:19.664457 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.03s 2025-09-08 00:56:19.664469 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.66s 2025-09-08 00:56:19.664480 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.57s 2025-09-08 00:56:19.664491 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.25s 2025-09-08 00:56:19.664501 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.43s 2025-09-08 00:56:19.664512 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.20s 2025-09-08 00:56:19.664522 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.19s 2025-09-08 00:56:19.664533 | orchestrator | service-cert-copy : 
mariadb | Copying over backend internal TLS key ----- 3.11s 2025-09-08 00:56:19.664543 | orchestrator | Check MariaDB service --------------------------------------------------- 2.92s 2025-09-08 00:56:19.664554 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.83s 2025-09-08 00:56:19.664575 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.72s 2025-09-08 00:56:19.664586 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.58s 2025-09-08 00:56:19.664596 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.39s 2025-09-08 00:56:19.664640 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.19s 2025-09-08 00:56:19.664652 | orchestrator | 2025-09-08 00:56:19 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED 2025-09-08 00:56:19.664663 | orchestrator | 2025-09-08 00:56:19 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED 2025-09-08 00:56:19.664674 | orchestrator | 2025-09-08 00:56:19 | INFO  | Task 78ab58be-98cc-4dc2-9243-f4abab87601b is in state STARTED 2025-09-08 00:56:19.664853 | orchestrator | 2025-09-08 00:56:19 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:22.708409 | orchestrator | 2025-09-08 00:56:22 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED 2025-09-08 00:56:22.709720 | orchestrator | 2025-09-08 00:56:22 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED 2025-09-08 00:56:22.710787 | orchestrator | 2025-09-08 00:56:22 | INFO  | Task 78ab58be-98cc-4dc2-9243-f4abab87601b is in state STARTED 2025-09-08 00:56:22.710816 | orchestrator | 2025-09-08 00:56:22 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:25.756938 | orchestrator | 2025-09-08 00:56:25 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED 2025-09-08 
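The repeating status lines above come from a watcher that polls each task ID until it leaves the STARTED state, sleeping between rounds. A simplified Python sketch of that loop (`get_state` stands in for whatever API the real watcher queries; the log format is copied from the output above):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, sleep=time.sleep):
    """Poll task states until none is STARTED, logging like the output above."""
    while True:
        states = {t: get_state(t) for t in task_ids}
        for t, s in states.items():
            print(f"INFO  | Task {t} is in state {s}")
        if all(s != "STARTED" for s in states.values()):
            return states
        print(f"INFO  | Wait {int(interval)} second(s) until the next check")
        sleep(interval)
```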
00:56:25.757382 | orchestrator | 2025-09-08 00:56:25 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED 2025-09-08 00:56:25.760217 | orchestrator | 2025-09-08 00:56:25 | INFO  | Task 78ab58be-98cc-4dc2-9243-f4abab87601b is in state STARTED 2025-09-08 00:56:25.760251 | orchestrator | 2025-09-08 00:56:25 | INFO  | Wait 1 second(s) until the next check [... identical three-task status polling repeats every ~3 seconds from 00:56:28 through 00:57:20 ...] 2025-09-08 00:57:23.635715 | orchestrator | 2025-09-08 00:57:23 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED 2025-09-08 00:57:23.637716 | orchestrator | 2025-09-08 00:57:23 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in
state STARTED 2025-09-08 00:57:23.639645 | orchestrator | 2025-09-08 00:57:23 | INFO  | Task 78ab58be-98cc-4dc2-9243-f4abab87601b is in state STARTED 2025-09-08 00:57:23.639673 | orchestrator | 2025-09-08 00:57:23 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:26.684710 | orchestrator | 2025-09-08 00:57:26 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED 2025-09-08 00:57:26.686202 | orchestrator | 2025-09-08 00:57:26 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state STARTED 2025-09-08 00:57:26.687949 | orchestrator | 2025-09-08 00:57:26 | INFO  | Task 78ab58be-98cc-4dc2-9243-f4abab87601b is in state STARTED 2025-09-08 00:57:26.687974 | orchestrator | 2025-09-08 00:57:26 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:29.747184 | orchestrator | 2025-09-08 00:57:29 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED 2025-09-08 00:57:29.749104 | orchestrator | 2025-09-08 00:57:29 | INFO  | Task 9ec5bf01-d9eb-4572-9b4e-43373cf7b73c is in state SUCCESS 2025-09-08 00:57:29.751321 | orchestrator | 2025-09-08 00:57:29.751344 | orchestrator | 2025-09-08 00:57:29.751350 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-09-08 00:57:29.751357 | orchestrator | 2025-09-08 00:57:29.751363 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-08 00:57:29.751369 | orchestrator | Monday 08 September 2025 00:55:17 +0000 (0:00:00.597) 0:00:00.597 ****** 2025-09-08 00:57:29.751375 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:57:29.751382 | orchestrator | 2025-09-08 00:57:29.751388 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-08 00:57:29.751394 | orchestrator | Monday 08 September 2025 00:55:18 +0000 (0:00:00.624) 0:00:01.222 ****** 2025-09-08 
00:57:29.751400 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:29.751407 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:29.751413 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:29.751437 | orchestrator |
2025-09-08 00:57:29.751444 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-09-08 00:57:29.751449 | orchestrator | Monday 08 September 2025 00:55:18 +0000 (0:00:00.638) 0:00:01.860 ******
2025-09-08 00:57:29.751455 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:29.751460 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:29.751466 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:29.751471 | orchestrator |
2025-09-08 00:57:29.751477 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-09-08 00:57:29.751483 | orchestrator | Monday 08 September 2025 00:55:19 +0000 (0:00:00.288) 0:00:02.149 ******
2025-09-08 00:57:29.751488 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:29.751494 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:29.751499 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:29.751504 | orchestrator |
2025-09-08 00:57:29.751510 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-09-08 00:57:29.751515 | orchestrator | Monday 08 September 2025 00:55:19 +0000 (0:00:00.785) 0:00:02.934 ******
2025-09-08 00:57:29.751539 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:29.751544 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:29.751550 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:29.751555 | orchestrator |
2025-09-08 00:57:29.751561 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-09-08 00:57:29.751566 | orchestrator | Monday 08 September 2025 00:55:20 +0000 (0:00:00.300) 0:00:03.235 ******
2025-09-08 00:57:29.751591 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:29.751597 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:29.751602 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:29.751607 | orchestrator |
2025-09-08 00:57:29.751612 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-09-08 00:57:29.751618 | orchestrator | Monday 08 September 2025 00:55:20 +0000 (0:00:00.307) 0:00:03.542 ******
2025-09-08 00:57:29.751623 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:29.751628 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:29.751633 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:29.751639 | orchestrator |
2025-09-08 00:57:29.751644 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-09-08 00:57:29.751649 | orchestrator | Monday 08 September 2025 00:55:20 +0000 (0:00:00.297) 0:00:03.839 ******
2025-09-08 00:57:29.751655 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.751661 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:29.751667 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:29.751672 | orchestrator |
2025-09-08 00:57:29.751677 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-09-08 00:57:29.751682 | orchestrator | Monday 08 September 2025 00:55:21 +0000 (0:00:00.477) 0:00:04.317 ******
2025-09-08 00:57:29.751688 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:29.751693 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:29.751698 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:29.751703 | orchestrator |
2025-09-08 00:57:29.751709 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-09-08 00:57:29.751714 | orchestrator | Monday 08 September 2025 00:55:21 +0000 (0:00:00.285) 0:00:04.602 ******
2025-09-08 00:57:29.751719 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-08 00:57:29.751725 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-08 00:57:29.751730 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-08 00:57:29.751735 | orchestrator |
2025-09-08 00:57:29.751740 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-09-08 00:57:29.751745 | orchestrator | Monday 08 September 2025 00:55:22 +0000 (0:00:00.674) 0:00:05.277 ******
2025-09-08 00:57:29.751751 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:29.751756 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:29.751761 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:29.751766 | orchestrator |
2025-09-08 00:57:29.751772 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-09-08 00:57:29.751777 | orchestrator | Monday 08 September 2025 00:55:22 +0000 (0:00:00.408) 0:00:05.686 ******
2025-09-08 00:57:29.751782 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-08 00:57:29.751789 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-08 00:57:29.751794 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-08 00:57:29.751799 | orchestrator |
2025-09-08 00:57:29.751804 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-09-08 00:57:29.751810 | orchestrator | Monday 08 September 2025 00:55:24 +0000 (0:00:02.165) 0:00:07.852 ******
2025-09-08 00:57:29.751815 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-08 00:57:29.751821 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-08 00:57:29.751826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-08 00:57:29.751831 |
orchestrator | skipping: [testbed-node-3] 2025-09-08 00:57:29.751837 | orchestrator | 2025-09-08 00:57:29.751842 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-08 00:57:29.751854 | orchestrator | Monday 08 September 2025 00:55:25 +0000 (0:00:00.410) 0:00:08.263 ****** 2025-09-08 00:57:29.751862 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.751877 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.751882 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.751888 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:57:29.751893 | orchestrator | 2025-09-08 00:57:29.751898 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-08 00:57:29.751904 | orchestrator | Monday 08 September 2025 00:55:25 +0000 (0:00:00.781) 0:00:09.044 ****** 2025-09-08 00:57:29.751915 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.751924 | 
orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.751930 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.751935 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:57:29.751942 | orchestrator | 2025-09-08 00:57:29.751948 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-08 00:57:29.751955 | orchestrator | Monday 08 September 2025 00:55:26 +0000 (0:00:00.159) 0:00:09.203 ****** 2025-09-08 00:57:29.751964 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f2ec84c32919', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-08 00:55:23.312763', 'end': '2025-09-08 00:55:23.357931', 'delta': '0:00:00.045168', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2ec84c32919'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 
'ansible_loop_var': 'item'}) 2025-09-08 00:57:29.751974 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'ba79ec63a8d7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-08 00:55:24.073354', 'end': '2025-09-08 00:55:24.117750', 'delta': '0:00:00.044396', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ba79ec63a8d7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-09-08 00:57:29.751990 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c9755db2dc79', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-08 00:55:24.600697', 'end': '2025-09-08 00:55:24.643934', 'delta': '0:00:00.043237', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c9755db2dc79'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-09-08 00:57:29.751997 | orchestrator | 2025-09-08 00:57:29.752003 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-08 00:57:29.752009 | orchestrator | Monday 08 September 2025 00:55:26 +0000 (0:00:00.435) 0:00:09.639 ****** 2025-09-08 00:57:29.752015 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:57:29.752022 | 
orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:29.752028 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:29.752034 | orchestrator |
2025-09-08 00:57:29.752041 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-09-08 00:57:29.752047 | orchestrator | Monday 08 September 2025 00:55:27 +0000 (0:00:00.447) 0:00:10.086 ******
2025-09-08 00:57:29.752053 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-09-08 00:57:29.752059 | orchestrator |
2025-09-08 00:57:29.752066 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-09-08 00:57:29.752072 | orchestrator | Monday 08 September 2025 00:55:28 +0000 (0:00:01.693) 0:00:11.780 ******
2025-09-08 00:57:29.752082 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.752088 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:29.752095 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:29.752101 | orchestrator |
2025-09-08 00:57:29.752107 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-09-08 00:57:29.752113 | orchestrator | Monday 08 September 2025 00:55:29 +0000 (0:00:00.327) 0:00:12.107 ******
2025-09-08 00:57:29.752119 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.752125 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:29.752132 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:29.752138 | orchestrator |
2025-09-08 00:57:29.752144 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-08 00:57:29.752150 | orchestrator | Monday 08 September 2025 00:55:29 +0000 (0:00:00.398) 0:00:12.506 ******
2025-09-08 00:57:29.752157 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.752163 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:29.752169 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:29.752175 | orchestrator |
2025-09-08 00:57:29.752182 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-09-08 00:57:29.752188 | orchestrator | Monday 08 September 2025 00:55:29 +0000 (0:00:00.466) 0:00:12.972 ******
2025-09-08 00:57:29.752194 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:29.752200 | orchestrator |
2025-09-08 00:57:29.752206 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-09-08 00:57:29.752213 | orchestrator | Monday 08 September 2025 00:55:30 +0000 (0:00:00.134) 0:00:13.107 ******
2025-09-08 00:57:29.752219 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.752225 | orchestrator |
2025-09-08 00:57:29.752231 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-08 00:57:29.752237 | orchestrator | Monday 08 September 2025 00:55:30 +0000 (0:00:00.235) 0:00:13.342 ******
2025-09-08 00:57:29.752243 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.752253 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:29.752260 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:29.752266 | orchestrator |
2025-09-08 00:57:29.752272 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-08 00:57:29.752279 | orchestrator | Monday 08 September 2025 00:55:30 +0000 (0:00:00.287) 0:00:13.630 ******
2025-09-08 00:57:29.752285 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.752291 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:29.752298 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:29.752304 | orchestrator |
2025-09-08 00:57:29.752311 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-08 00:57:29.752316 | orchestrator | Monday 08 September 2025 00:55:30 +0000 (0:00:00.353) 0:00:13.983 ******
2025-09-08 00:57:29.752321 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.752327 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:29.752332 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:29.752337 | orchestrator |
2025-09-08 00:57:29.752342 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-08 00:57:29.752348 | orchestrator | Monday 08 September 2025 00:55:31 +0000 (0:00:00.539) 0:00:14.523 ******
2025-09-08 00:57:29.752353 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.752636 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:29.752652 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:29.752657 | orchestrator |
2025-09-08 00:57:29.752663 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-09-08 00:57:29.752668 | orchestrator | Monday 08 September 2025 00:55:31 +0000 (0:00:00.307) 0:00:14.830 ******
2025-09-08 00:57:29.752673 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.752679 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:29.752684 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:29.752689 | orchestrator |
2025-09-08 00:57:29.752694 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-09-08 00:57:29.752700 | orchestrator | Monday 08 September 2025 00:55:32 +0000 (0:00:00.339) 0:00:15.170 ******
2025-09-08 00:57:29.752705 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.752710 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:29.752715 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:29.752721 | orchestrator |
2025-09-08 00:57:29.752726 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-09-08 00:57:29.752736 | orchestrator | Monday 08 September 2025 00:55:32 +0000 (0:00:00.319) 0:00:15.490 ****** 2025-09-08
00:57:29.752741 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:57:29.752747 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:57:29.752752 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:57:29.752757 | orchestrator | 2025-09-08 00:57:29.752762 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-08 00:57:29.752768 | orchestrator | Monday 08 September 2025 00:55:32 +0000 (0:00:00.497) 0:00:15.988 ****** 2025-09-08 00:57:29.752775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6b18b724--0587--5812--9148--41071cea985b-osd--block--6b18b724--0587--5812--9148--41071cea985b', 'dm-uuid-LVM-Y0fi8lofW9mz3to22zm3kbYL7KhM63Y1xlLKU16Wd1xmsixYPceTgcWXPxL1aXLJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9b42feaf--b3bc--5f68--b3eb--37674b93132b-osd--block--9b42feaf--b3bc--5f68--b3eb--37674b93132b', 'dm-uuid-LVM-xGVWByu1BSZhpyUwFR2O5UYeo7Gtrkxn0tf2jzAkUNYUZtr15YuY2x3DX2l8Q9zK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ea3e0024--52d1--5c15--9011--f3e2d7c1d29b-osd--block--ea3e0024--52d1--5c15--9011--f3e2d7c1d29b', 'dm-uuid-LVM-VzUCABi2BuQurjhQCMyt68tIclROzO0ZBMrjgqoBkw7h7LcfDuM8CcFlGhUXcEr9'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aa077d44--869a--533b--aa21--81dea0f926a7-osd--block--aa077d44--869a--533b--aa21--81dea0f926a7', 'dm-uuid-LVM-wucYTeDbWI7QcvaQP4VymYqWox5BgGvEFFgtYXXIG7lpzKethcm4zCW52693sfjv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752863 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691', 'scsi-SQEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part1', 'scsi-SQEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part14', 'scsi-SQEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part15', 'scsi-SQEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part16', 'scsi-SQEMU_QEMU_HARDDISK_533618ca-ba76-4ec8-ae9d-a2b6607e0691-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:57:29.752918 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6b18b724--0587--5812--9148--41071cea985b-osd--block--6b18b724--0587--5812--9148--41071cea985b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GcQWON-gSjm-sim7-whlw-7wIw-EiAC-zzmCr8', 'scsi-0QEMU_QEMU_HARDDISK_db00b734-b58e-4932-8acd-6a266572e733', 'scsi-SQEMU_QEMU_HARDDISK_db00b734-b58e-4932-8acd-6a266572e733'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:57:29.752931 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9b42feaf--b3bc--5f68--b3eb--37674b93132b-osd--block--9b42feaf--b3bc--5f68--b3eb--37674b93132b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WWtSOz-FohK-vviu-kCAU-m6xa-xJ2T-jmK4pl', 'scsi-0QEMU_QEMU_HARDDISK_8d0cadb8-6915-4fd2-b4e0-4946f7f23ce1', 'scsi-SQEMU_QEMU_HARDDISK_8d0cadb8-6915-4fd2-b4e0-4946f7f23ce1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:57:29.752948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f7dc1ee-c7b6-4bcc-8d38-7d9cabc41a41', 'scsi-SQEMU_QEMU_HARDDISK_1f7dc1ee-c7b6-4bcc-8d38-7d9cabc41a41'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:57:29.752974 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3', 'scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part1', 'scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part14', 'scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part15', 'scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part16', 'scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:57:29.752985 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--df550631--cfd3--5799--aa47--c702e103b9e1-osd--block--df550631--cfd3--5799--aa47--c702e103b9e1', 'dm-uuid-LVM-eqsdTLwl2bClC02oRazwntyfs3menYK4OsSbFlRGX7fYoYxlReT90CLA97P8MMZm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.752992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:57:29.753005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eee7454c--3e15--5681--817b--16336d12a7fd-osd--block--eee7454c--3e15--5681--817b--16336d12a7fd', 'dm-uuid-LVM-bDkGLzHpLD658aJtO5kZxXyH8rTVEF09elbHrBEgYzpzpjqvnJgfUL1koASdL1iJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.753011 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--ea3e0024--52d1--5c15--9011--f3e2d7c1d29b-osd--block--ea3e0024--52d1--5c15--9011--f3e2d7c1d29b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HHbReR-4ZVt-wANb-cQX5-t55v-9em9-DaJpxU', 'scsi-0QEMU_QEMU_HARDDISK_4b92dc1e-8c5d-4e7b-ac22-fcae021763ab', 'scsi-SQEMU_QEMU_HARDDISK_4b92dc1e-8c5d-4e7b-ac22-fcae021763ab'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:57:29.753017 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:57:29.753023 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--aa077d44--869a--533b--aa21--81dea0f926a7-osd--block--aa077d44--869a--533b--aa21--81dea0f926a7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fXg5xk-Hlji-j4Cf-XVAJ-bXQn-YbEm-JQawdo', 'scsi-0QEMU_QEMU_HARDDISK_59c5476b-d42d-4c70-8df0-eefae278ca55', 'scsi-SQEMU_QEMU_HARDDISK_59c5476b-d42d-4c70-8df0-eefae278ca55'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:57:29.753028 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4ba40c0-17ae-4bff-a3cd-012c30b3474e', 'scsi-SQEMU_QEMU_HARDDISK_d4ba40c0-17ae-4bff-a3cd-012c30b3474e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:57:29.753034 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.753044 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.753054 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}) 
 2025-09-08 00:57:29.753062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.753068 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:57:29.753074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.753079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.753085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.753090 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.753096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:29.753109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8', 'scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part1', 'scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part14', 'scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part15', 'scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part16', 'scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:57:29.753120 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--df550631--cfd3--5799--aa47--c702e103b9e1-osd--block--df550631--cfd3--5799--aa47--c702e103b9e1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Rhjrl9-NLaA-oEAo-wptm-3z2r-6sbO-DYv4y8', 'scsi-0QEMU_QEMU_HARDDISK_a654280a-a62d-423c-bf4b-ecfb391ad989', 'scsi-SQEMU_QEMU_HARDDISK_a654280a-a62d-423c-bf4b-ecfb391ad989'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:57:29.753126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--eee7454c--3e15--5681--817b--16336d12a7fd-osd--block--eee7454c--3e15--5681--817b--16336d12a7fd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-k22s6L-vE8Z-OfUE-wH35-l99G-jzdB-FLSIKu', 'scsi-0QEMU_QEMU_HARDDISK_63bbd3aa-19f1-48b0-9249-561d852b638c', 'scsi-SQEMU_QEMU_HARDDISK_63bbd3aa-19f1-48b0-9249-561d852b638c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:57:29.753132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17ecbc41-9c45-4ac3-8b64-5422c11ec1e9', 'scsi-SQEMU_QEMU_HARDDISK_17ecbc41-9c45-4ac3-8b64-5422c11ec1e9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-08 00:57:29.753186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-08 00:57:29.753198 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:29.753204 | orchestrator |
2025-09-08 00:57:29.753209 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-09-08 00:57:29.753385 | orchestrator | Monday 08 September 2025 00:55:33 +0000 (0:00:00.657) 0:00:16.645 ******
2025-09-08 00:57:29.753397 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6b18b724--0587--5812--9148--41071cea985b-osd--block--6b18b724--0587--5812--9148--41071cea985b', 'dm-uuid-LVM-Y0fi8lofW9mz3to22zm3kbYL7KhM63Y1xlLKU16Wd1xmsixYPceTgcWXPxL1aXLJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:29.753408 | orchestrator | skipping: [testbed-node-3] => (item=dm-1)
2025-09-08 00:57:29.753414 | orchestrator | skipping: [testbed-node-3] => (item=loop0)
2025-09-08 00:57:29.753434 | orchestrator | skipping: [testbed-node-3] => (item=loop1)
2025-09-08 00:57:29.753475 | orchestrator | skipping: [testbed-node-3] => (item=loop2)
2025-09-08 00:57:29.753488 | orchestrator | skipping: [testbed-node-3] => (item=loop3)
2025-09-08 00:57:29.753500 | orchestrator | skipping: [testbed-node-3] => (item=loop4)
2025-09-08 00:57:29.753510 | orchestrator | skipping: [testbed-node-3] => (item=loop5)
2025-09-08 00:57:29.753515 | orchestrator | skipping: [testbed-node-3] => (item=loop6)
2025-09-08 00:57:29.753521 | orchestrator | skipping: [testbed-node-4] => (item=dm-0)
2025-09-08 00:57:29.753527 | orchestrator | skipping: [testbed-node-4] => (item=dm-1)
2025-09-08 00:57:29.753542 | orchestrator | skipping: [testbed-node-3] => (item=loop7)
2025-09-08 00:57:29.753548 | orchestrator | skipping: [testbed-node-4] => (item=loop0)
2025-09-08 00:57:29.753558 | orchestrator | skipping: [testbed-node-3] => (item=sda)
2025-09-08 00:57:29.753565 | orchestrator | skipping: [testbed-node-4] => (item=loop1)
2025-09-08 00:57:29.753580 | orchestrator | skipping: [testbed-node-3] => (item=sdb)
2025-09-08 00:57:29.753588 | orchestrator | skipping: [testbed-node-4] => (item=loop2)
2025-09-08 00:57:29.753597 | orchestrator | skipping: [testbed-node-3] => (item=sdc)
2025-09-08 00:57:29.753603 | orchestrator | skipping: [testbed-node-3] => (item=sdd)
2025-09-08 00:57:29.753609 | orchestrator | skipping: [testbed-node-4] => (item=loop3)
2025-09-08 00:57:29.753626 | orchestrator | skipping: [testbed-node-3] => (item=sr0)
2025-09-08 00:57:29.753632 | orchestrator | skipping: [testbed-node-4] => (item=loop4)
2025-09-08 00:57:29.753638 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.753646 | orchestrator | skipping: [testbed-node-4] => (item=loop5)
2025-09-08 00:57:29.753652 | orchestrator | skipping: [testbed-node-4] => (item=loop6)
2025-09-08 00:57:29.753658 | orchestrator | skipping: [testbed-node-5] => (item=dm-0)
2025-09-08 00:57:29.753664 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids':
['dm-name-ceph--eee7454c--3e15--5681--817b--16336d12a7fd-osd--block--eee7454c--3e15--5681--817b--16336d12a7fd', 'dm-uuid-LVM-bDkGLzHpLD658aJtO5kZxXyH8rTVEF09elbHrBEgYzpzpjqvnJgfUL1koASdL1iJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.753676 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.753682 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.753691 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.753697 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3', 'scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part1', 'scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part14', 'scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part15', 'scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': 
['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part16', 'scsi-SQEMU_QEMU_HARDDISK_bf5ec338-9407-4616-bedc-d4200aedf8a3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.753710 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.753716 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ea3e0024--52d1--5c15--9011--f3e2d7c1d29b-osd--block--ea3e0024--52d1--5c15--9011--f3e2d7c1d29b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HHbReR-4ZVt-wANb-cQX5-t55v-9em9-DaJpxU', 'scsi-0QEMU_QEMU_HARDDISK_4b92dc1e-8c5d-4e7b-ac22-fcae021763ab', 'scsi-SQEMU_QEMU_HARDDISK_4b92dc1e-8c5d-4e7b-ac22-fcae021763ab'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.753726 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--aa077d44--869a--533b--aa21--81dea0f926a7-osd--block--aa077d44--869a--533b--aa21--81dea0f926a7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fXg5xk-Hlji-j4Cf-XVAJ-bXQn-YbEm-JQawdo', 'scsi-0QEMU_QEMU_HARDDISK_59c5476b-d42d-4c70-8df0-eefae278ca55', 'scsi-SQEMU_QEMU_HARDDISK_59c5476b-d42d-4c70-8df0-eefae278ca55'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.753732 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.753738 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4ba40c0-17ae-4bff-a3cd-012c30b3474e', 'scsi-SQEMU_QEMU_HARDDISK_d4ba40c0-17ae-4bff-a3cd-012c30b3474e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.753751 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.753757 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.753762 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:57:29.753771 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.753777 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.753783 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.753793 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8', 'scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part1', 'scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part14', 'scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part15', 'scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part15'], 'labels': ['UEFI'], 
'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part16', 'scsi-SQEMU_QEMU_HARDDISK_e92d3e52-3850-4577-a26e-7745eca46ff8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.753807 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--df550631--cfd3--5799--aa47--c702e103b9e1-osd--block--df550631--cfd3--5799--aa47--c702e103b9e1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Rhjrl9-NLaA-oEAo-wptm-3z2r-6sbO-DYv4y8', 'scsi-0QEMU_QEMU_HARDDISK_a654280a-a62d-423c-bf4b-ecfb391ad989', 'scsi-SQEMU_QEMU_HARDDISK_a654280a-a62d-423c-bf4b-ecfb391ad989'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.753814 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--eee7454c--3e15--5681--817b--16336d12a7fd-osd--block--eee7454c--3e15--5681--817b--16336d12a7fd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-k22s6L-vE8Z-OfUE-wH35-l99G-jzdB-FLSIKu', 'scsi-0QEMU_QEMU_HARDDISK_63bbd3aa-19f1-48b0-9249-561d852b638c', 'scsi-SQEMU_QEMU_HARDDISK_63bbd3aa-19f1-48b0-9249-561d852b638c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:29.753824 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17ecbc41-9c45-4ac3-8b64-5422c11ec1e9', 'scsi-SQEMU_QEMU_HARDDISK_17ecbc41-9c45-4ac3-8b64-5422c11ec1e9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:29.753836 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:29.753842 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:29.753847 | orchestrator |
2025-09-08 00:57:29.753853 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-09-08 00:57:29.753858 | orchestrator | Monday 08 September 2025 00:55:34 +0000 (0:00:00.597) 0:00:17.242 ******
2025-09-08 00:57:29.753864 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:29.753870 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:29.753875 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:29.753881 | orchestrator |
2025-09-08 00:57:29.753886 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-09-08 00:57:29.753892 | orchestrator | Monday 08 September 2025 00:55:34 +0000 (0:00:00.673) 0:00:17.915 ******
2025-09-08 00:57:29.753897 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:29.753902 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:29.753908 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:29.753913 | orchestrator |
2025-09-08 00:57:29.753918 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-08 00:57:29.753924 | orchestrator | Monday 08 September 2025 00:55:35 +0000 (0:00:00.461) 0:00:18.377 ******
2025-09-08 00:57:29.753929 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:29.753935 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:29.753940 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:29.753945 | orchestrator |
2025-09-08 00:57:29.753951 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-08 00:57:29.753956 | orchestrator | Monday 08 September 2025 00:55:35 +0000 (0:00:00.637) 0:00:19.015 ******
2025-09-08 00:57:29.753965 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.753971 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:29.753976 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:29.753982 | orchestrator |
2025-09-08 00:57:29.753987 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-08 00:57:29.753992 | orchestrator | Monday 08 September 2025 00:55:36 +0000 (0:00:00.307) 0:00:19.322 ******
2025-09-08 00:57:29.753998 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.754003 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:29.754009 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:29.754014 | orchestrator |
2025-09-08 00:57:29.754060 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-08 00:57:29.754066 | orchestrator | Monday 08 September 2025 00:55:36 +0000 (0:00:00.427) 0:00:19.750 ******
2025-09-08 00:57:29.754072 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.754077 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:29.754082 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:29.754088 | orchestrator |
2025-09-08 00:57:29.754093 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-08 00:57:29.754098 | orchestrator | Monday 08 September 2025 00:55:37 +0000 (0:00:00.530) 0:00:20.280 ******
2025-09-08 00:57:29.754104 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-08 00:57:29.754110 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-08 00:57:29.754115 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-08 00:57:29.754121 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-08 00:57:29.754127 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-08 00:57:29.754134 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-08 00:57:29.754140 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-08 00:57:29.754146 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-08 00:57:29.754152 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-09-08 00:57:29.754159 | orchestrator |
2025-09-08 00:57:29.754165 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-09-08 00:57:29.754171 | orchestrator | Monday 08 September 2025 00:55:38 +0000 (0:00:00.858) 0:00:21.139 ******
2025-09-08 00:57:29.754177 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-08 00:57:29.754183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-08 00:57:29.754190 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-08 00:57:29.754196 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.754202 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-08 00:57:29.754209 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-08 00:57:29.754215 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-08 00:57:29.754221 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:29.754227 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-08 00:57:29.754233 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-08 00:57:29.754239 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-08 00:57:29.754245 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:29.754251 | orchestrator |
2025-09-08 00:57:29.754258 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-09-08 00:57:29.754264 | orchestrator | Monday 08 September 2025 00:55:38 +0000 (0:00:00.363) 0:00:21.502 ******
2025-09-08 00:57:29.754271 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:57:29.754277 | orchestrator |
2025-09-08 00:57:29.754284 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-08 00:57:29.754292 | orchestrator | Monday 08 September 2025 00:55:39 +0000 (0:00:00.722) 0:00:22.225 ******
2025-09-08 00:57:29.754299 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.754305 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:29.754311 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:29.754318 | orchestrator |
2025-09-08 00:57:29.754328 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-08 00:57:29.754334 | orchestrator | Monday 08 September 2025 00:55:39 +0000 (0:00:00.319) 0:00:22.545 ******
2025-09-08 00:57:29.754341 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.754347 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:29.754353 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:29.754364 | orchestrator |
2025-09-08 00:57:29.754370 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-08 00:57:29.754376 | orchestrator | Monday 08 September 2025 00:55:39 +0000 (0:00:00.312) 0:00:22.857 ******
2025-09-08 00:57:29.754382 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.754389 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:29.754395 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:29.754401 | orchestrator |
2025-09-08 00:57:29.754407 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-08 00:57:29.754413 | orchestrator | Monday 08 September 2025 00:55:40 +0000 (0:00:00.332) 0:00:23.189 ******
2025-09-08 00:57:29.754432 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:29.754439 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:29.754445 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:29.754451 | orchestrator |
2025-09-08 00:57:29.754457 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-08 00:57:29.754464 | orchestrator | Monday 08 September 2025 00:55:40 +0000 (0:00:00.625) 0:00:23.815 ******
2025-09-08 00:57:29.754470 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:57:29.754477 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:57:29.754482 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:57:29.754487 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.754493 | orchestrator |
2025-09-08 00:57:29.754498 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-08 00:57:29.754507 | orchestrator | Monday 08 September 2025 00:55:41 +0000 (0:00:00.442) 0:00:24.257 ******
2025-09-08 00:57:29.754513 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:57:29.754518 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:57:29.754524 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:57:29.754529 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.754534 | orchestrator |
2025-09-08 00:57:29.754540 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-08 00:57:29.754545 | orchestrator | Monday 08 September 2025 00:55:41 +0000 (0:00:00.358) 0:00:24.637 ******
2025-09-08 00:57:29.754550 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:57:29.754556 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:57:29.754561 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:57:29.754567 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.754572 | orchestrator |
2025-09-08 00:57:29.754577 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-08 00:57:29.754583 | orchestrator | Monday 08 September 2025 00:55:41 +0000 (0:00:00.358) 0:00:24.996 ******
2025-09-08 00:57:29.754588 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:29.754593 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:29.754607 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:29.754613 | orchestrator |
2025-09-08 00:57:29.754618 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-08 00:57:29.754624 | orchestrator | Monday 08 September 2025 00:55:42 +0000 (0:00:00.311) 0:00:25.308 ******
2025-09-08 00:57:29.754629 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-08 00:57:29.754642 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-08 00:57:29.754647 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-08 00:57:29.754652 | orchestrator |
2025-09-08 00:57:29.754658 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-08 00:57:29.754663 | orchestrator | Monday 08 September 2025 00:55:42 +0000 (0:00:00.553) 0:00:25.861 ******
2025-09-08 00:57:29.754668 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-08 00:57:29.754674 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-08 00:57:29.754679 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-08 00:57:29.754691 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:57:29.754697 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-08 00:57:29.754702 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-08 00:57:29.754707 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-08 00:57:29.754712 | orchestrator |
2025-09-08 00:57:29.754718 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-09-08 00:57:29.754723 | orchestrator | Monday 08 September 2025 00:55:43 +0000 (0:00:01.016) 0:00:26.878 ******
2025-09-08 00:57:29.754728 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-08 00:57:29.754734 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-08 00:57:29.754739 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-08 00:57:29.754744 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:57:29.754750 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-08 00:57:29.754755 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-08 00:57:29.754760 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-08 00:57:29.754766 | orchestrator |
2025-09-08 00:57:29.754774 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-09-08 00:57:29.754779 | orchestrator | Monday 08 September 2025 00:55:45 +0000 (0:00:01.951) 0:00:28.830 ******
2025-09-08 00:57:29.754785 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:29.754790 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:29.754796 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-09-08 00:57:29.754801 | orchestrator |
2025-09-08 00:57:29.754806 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-09-08 00:57:29.754812 | orchestrator | Monday 08 September 2025 00:55:46 +0000 (0:00:00.376) 0:00:29.207 ******
2025-09-08 00:57:29.754818 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-08 00:57:29.754823 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type':
1}) 2025-09-08 00:57:29.754833 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-08 00:57:29.754839 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-08 00:57:29.754844 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-08 00:57:29.754850 | orchestrator | 2025-09-08 00:57:29.754855 | orchestrator | TASK [generate keys] *********************************************************** 2025-09-08 00:57:29.754866 | orchestrator | Monday 08 September 2025 00:56:32 +0000 (0:00:46.122) 0:01:15.329 ****** 2025-09-08 00:57:29.754871 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:57:29.754877 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:57:29.754882 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:57:29.754887 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:57:29.754893 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:57:29.754898 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 
00:57:29.754903 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-09-08 00:57:29.754909 | orchestrator | 2025-09-08 00:57:29.754914 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-09-08 00:57:29.754919 | orchestrator | Monday 08 September 2025 00:56:56 +0000 (0:00:24.306) 0:01:39.635 ****** 2025-09-08 00:57:29.754925 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:57:29.754930 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:57:29.754935 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:57:29.754941 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:57:29.754946 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:57:29.754951 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:57:29.754957 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-08 00:57:29.754962 | orchestrator | 2025-09-08 00:57:29.754967 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-09-08 00:57:29.754973 | orchestrator | Monday 08 September 2025 00:57:09 +0000 (0:00:12.682) 0:01:52.318 ****** 2025-09-08 00:57:29.754978 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:57:29.754983 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-08 00:57:29.754989 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-08 00:57:29.754994 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:57:29.755000 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2025-09-08 00:57:29.755005 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-08 00:57:29.755013 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:57:29.755019 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-08 00:57:29.755024 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-08 00:57:29.755029 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:57:29.755035 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-08 00:57:29.755040 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-08 00:57:29.755045 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:57:29.755051 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-08 00:57:29.755056 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-08 00:57:29.755061 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:57:29.755067 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-08 00:57:29.755078 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-08 00:57:29.755083 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-09-08 00:57:29.755089 | orchestrator | 2025-09-08 00:57:29.755094 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:57:29.755099 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-09-08 00:57:29.755109 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-08 00:57:29.755115 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-08 00:57:29.755120 | orchestrator |
2025-09-08 00:57:29.755126 | orchestrator |
2025-09-08 00:57:29.755131 | orchestrator |
2025-09-08 00:57:29.755136 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:57:29.755142 | orchestrator | Monday 08 September 2025 00:57:27 +0000 (0:00:18.292) 0:02:10.610 ******
2025-09-08 00:57:29.755147 | orchestrator | ===============================================================================
2025-09-08 00:57:29.755152 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.12s
2025-09-08 00:57:29.755158 | orchestrator | generate keys ---------------------------------------------------------- 24.31s
2025-09-08 00:57:29.755163 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.29s
2025-09-08 00:57:29.755168 | orchestrator | get keys from monitors ------------------------------------------------- 12.68s
2025-09-08 00:57:29.755174 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.17s
2025-09-08 00:57:29.755179 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.95s
2025-09-08 00:57:29.755184 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.69s
2025-09-08 00:57:29.755190 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.02s
2025-09-08 00:57:29.755195 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.86s
2025-09-08 00:57:29.755200 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.79s
2025-09-08 00:57:29.755206 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.78s
2025-09-08 00:57:29.755211 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.72s
2025-09-08 00:57:29.755216 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.67s
2025-09-08 00:57:29.755222 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.67s
2025-09-08 00:57:29.755227 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.66s
2025-09-08 00:57:29.755232 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.64s
2025-09-08 00:57:29.755238 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s
2025-09-08 00:57:29.755243 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.63s
2025-09-08 00:57:29.755248 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.62s
2025-09-08 00:57:29.755254 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.60s
2025-09-08 00:57:29.755259 | orchestrator | 2025-09-08 00:57:29 | INFO  | Task 78ab58be-98cc-4dc2-9243-f4abab87601b is in state STARTED
2025-09-08 00:57:29.755265 | orchestrator | 2025-09-08 00:57:29 | INFO  | Task 5174788b-5be7-4a09-b4ca-b7b7eb55d0bd is in state STARTED
2025-09-08 00:57:29.755270 | orchestrator | 2025-09-08 00:57:29 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:57:32.809906 | orchestrator | 2025-09-08 00:57:32 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED
2025-09-08 00:57:32.811128 | orchestrator | 2025-09-08 00:57:32 | INFO  | Task 78ab58be-98cc-4dc2-9243-f4abab87601b is in state STARTED
2025-09-08 00:57:32.812593 | orchestrator | 2025-09-08 00:57:32 | INFO  | Task
5174788b-5be7-4a09-b4ca-b7b7eb55d0bd is in state STARTED 2025-09-08 00:57:32.812708 | orchestrator | 2025-09-08 00:57:32 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:35.863258 | orchestrator | 2025-09-08 00:57:35 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED 2025-09-08 00:57:35.865010 | orchestrator | 2025-09-08 00:57:35 | INFO  | Task 78ab58be-98cc-4dc2-9243-f4abab87601b is in state STARTED 2025-09-08 00:57:35.869119 | orchestrator | 2025-09-08 00:57:35 | INFO  | Task 5174788b-5be7-4a09-b4ca-b7b7eb55d0bd is in state STARTED 2025-09-08 00:57:35.869147 | orchestrator | 2025-09-08 00:57:35 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:38.910299 | orchestrator | 2025-09-08 00:57:38 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED 2025-09-08 00:57:38.913368 | orchestrator | 2025-09-08 00:57:38 | INFO  | Task 78ab58be-98cc-4dc2-9243-f4abab87601b is in state STARTED 2025-09-08 00:57:38.916513 | orchestrator | 2025-09-08 00:57:38 | INFO  | Task 5174788b-5be7-4a09-b4ca-b7b7eb55d0bd is in state STARTED 2025-09-08 00:57:38.916564 | orchestrator | 2025-09-08 00:57:38 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:41.971198 | orchestrator | 2025-09-08 00:57:41 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED 2025-09-08 00:57:41.972411 | orchestrator | 2025-09-08 00:57:41 | INFO  | Task 78ab58be-98cc-4dc2-9243-f4abab87601b is in state STARTED 2025-09-08 00:57:41.976383 | orchestrator | 2025-09-08 00:57:41 | INFO  | Task 5174788b-5be7-4a09-b4ca-b7b7eb55d0bd is in state STARTED 2025-09-08 00:57:41.976745 | orchestrator | 2025-09-08 00:57:41 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:45.020312 | orchestrator | 2025-09-08 00:57:45 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED 2025-09-08 00:57:45.022882 | orchestrator | 2025-09-08 00:57:45 | INFO  | Task 78ab58be-98cc-4dc2-9243-f4abab87601b is in state 
STARTED 2025-09-08 00:57:45.026341 | orchestrator | 2025-09-08 00:57:45 | INFO  | Task 5174788b-5be7-4a09-b4ca-b7b7eb55d0bd is in state STARTED 2025-09-08 00:57:45.026539 | orchestrator | 2025-09-08 00:57:45 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:48.089653 | orchestrator | 2025-09-08 00:57:48 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED 2025-09-08 00:57:48.090329 | orchestrator | 2025-09-08 00:57:48 | INFO  | Task 78ab58be-98cc-4dc2-9243-f4abab87601b is in state STARTED 2025-09-08 00:57:48.093470 | orchestrator | 2025-09-08 00:57:48 | INFO  | Task 5174788b-5be7-4a09-b4ca-b7b7eb55d0bd is in state STARTED 2025-09-08 00:57:48.094239 | orchestrator | 2025-09-08 00:57:48 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:51.145970 | orchestrator | 2025-09-08 00:57:51 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED 2025-09-08 00:57:51.148107 | orchestrator | 2025-09-08 00:57:51 | INFO  | Task 78ab58be-98cc-4dc2-9243-f4abab87601b is in state STARTED 2025-09-08 00:57:51.148760 | orchestrator | 2025-09-08 00:57:51 | INFO  | Task 5174788b-5be7-4a09-b4ca-b7b7eb55d0bd is in state STARTED 2025-09-08 00:57:51.148785 | orchestrator | 2025-09-08 00:57:51 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:54.199692 | orchestrator | 2025-09-08 00:57:54 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED 2025-09-08 00:57:54.201297 | orchestrator | 2025-09-08 00:57:54 | INFO  | Task 78ab58be-98cc-4dc2-9243-f4abab87601b is in state STARTED 2025-09-08 00:57:54.203232 | orchestrator | 2025-09-08 00:57:54 | INFO  | Task 5174788b-5be7-4a09-b4ca-b7b7eb55d0bd is in state STARTED 2025-09-08 00:57:54.203696 | orchestrator | 2025-09-08 00:57:54 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:57.251847 | orchestrator | 2025-09-08 00:57:57 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED 2025-09-08 00:57:57.254881 | orchestrator | 
2025-09-08 00:57:57 | INFO  | Task 78ab58be-98cc-4dc2-9243-f4abab87601b is in state STARTED 2025-09-08 00:57:57.256111 | orchestrator | 2025-09-08 00:57:57 | INFO  | Task 5174788b-5be7-4a09-b4ca-b7b7eb55d0bd is in state STARTED 2025-09-08 00:57:57.256536 | orchestrator | 2025-09-08 00:57:57 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:58:00.296070 | orchestrator | 2025-09-08 00:58:00 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED 2025-09-08 00:58:00.298639 | orchestrator | 2025-09-08 00:58:00 | INFO  | Task f505dbee-f66d-42fb-a190-4c0e16512259 is in state STARTED 2025-09-08 00:58:00.300174 | orchestrator | 2025-09-08 00:58:00 | INFO  | Task 78ab58be-98cc-4dc2-9243-f4abab87601b is in state STARTED 2025-09-08 00:58:00.301267 | orchestrator | 2025-09-08 00:58:00 | INFO  | Task 5174788b-5be7-4a09-b4ca-b7b7eb55d0bd is in state SUCCESS 2025-09-08 00:58:00.301293 | orchestrator | 2025-09-08 00:58:00 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:58:03.355138 | orchestrator | 2025-09-08 00:58:03 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED 2025-09-08 00:58:03.355967 | orchestrator | 2025-09-08 00:58:03 | INFO  | Task f505dbee-f66d-42fb-a190-4c0e16512259 is in state STARTED 2025-09-08 00:58:03.357527 | orchestrator | 2025-09-08 00:58:03 | INFO  | Task 78ab58be-98cc-4dc2-9243-f4abab87601b is in state STARTED 2025-09-08 00:58:03.357550 | orchestrator | 2025-09-08 00:58:03 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:58:06.407476 | orchestrator | 2025-09-08 00:58:06 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED 2025-09-08 00:58:06.409703 | orchestrator | 2025-09-08 00:58:06 | INFO  | Task f505dbee-f66d-42fb-a190-4c0e16512259 is in state STARTED 2025-09-08 00:58:06.412353 | orchestrator | 2025-09-08 00:58:06 | INFO  | Task 78ab58be-98cc-4dc2-9243-f4abab87601b is in state STARTED 2025-09-08 00:58:06.412565 | orchestrator | 2025-09-08 00:58:06 | INFO  | 
Wait 1 second(s) until the next check 2025-09-08 00:58:09.477269 | orchestrator | 2025-09-08 00:58:09 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED 2025-09-08 00:58:09.483691 | orchestrator | 2025-09-08 00:58:09 | INFO  | Task f505dbee-f66d-42fb-a190-4c0e16512259 is in state STARTED 2025-09-08 00:58:09.488842 | orchestrator | 2025-09-08 00:58:09 | INFO  | Task 78ab58be-98cc-4dc2-9243-f4abab87601b is in state STARTED 2025-09-08 00:58:09.489930 | orchestrator | 2025-09-08 00:58:09 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:58:12.538709 | orchestrator | 2025-09-08 00:58:12 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED 2025-09-08 00:58:12.540018 | orchestrator | 2025-09-08 00:58:12 | INFO  | Task f505dbee-f66d-42fb-a190-4c0e16512259 is in state STARTED 2025-09-08 00:58:12.542838 | orchestrator | 2025-09-08 00:58:12 | INFO  | Task 78ab58be-98cc-4dc2-9243-f4abab87601b is in state SUCCESS 2025-09-08 00:58:12.545554 | orchestrator | 2025-09-08 00:58:12.545601 | orchestrator | 2025-09-08 00:58:12.545614 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-09-08 00:58:12.545626 | orchestrator | 2025-09-08 00:58:12.545637 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-09-08 00:58:12.545650 | orchestrator | Monday 08 September 2025 00:57:32 +0000 (0:00:00.171) 0:00:00.171 ****** 2025-09-08 00:58:12.545689 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-08 00:58:12.545702 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-08 00:58:12.545713 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-08 00:58:12.545724 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.cinder-backup.keyring) 2025-09-08 00:58:12.545735 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-08 00:58:12.545746 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-08 00:58:12.545756 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-08 00:58:12.545767 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-08 00:58:12.545777 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-08 00:58:12.545788 | orchestrator | 2025-09-08 00:58:12.545799 | orchestrator | TASK [Create share directory] ************************************************** 2025-09-08 00:58:12.545809 | orchestrator | Monday 08 September 2025 00:57:36 +0000 (0:00:03.906) 0:00:04.078 ****** 2025-09-08 00:58:12.545821 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-08 00:58:12.545832 | orchestrator | 2025-09-08 00:58:12.545843 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-09-08 00:58:12.545854 | orchestrator | Monday 08 September 2025 00:57:37 +0000 (0:00:00.989) 0:00:05.068 ****** 2025-09-08 00:58:12.545864 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-09-08 00:58:12.545875 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-08 00:58:12.545886 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-08 00:58:12.545896 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-09-08 00:58:12.545907 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-08 
00:58:12.545917 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-09-08 00:58:12.545928 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-09-08 00:58:12.545939 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-09-08 00:58:12.545949 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-09-08 00:58:12.545959 | orchestrator | 2025-09-08 00:58:12.545970 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-09-08 00:58:12.545981 | orchestrator | Monday 08 September 2025 00:57:50 +0000 (0:00:13.548) 0:00:18.617 ****** 2025-09-08 00:58:12.545992 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-09-08 00:58:12.546003 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-08 00:58:12.546013 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-08 00:58:12.546074 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-09-08 00:58:12.546085 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-08 00:58:12.546096 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-09-08 00:58:12.546106 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-09-08 00:58:12.546117 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-09-08 00:58:12.546128 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-09-08 00:58:12.546148 | orchestrator | 2025-09-08 00:58:12.546159 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:58:12.546187 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:58:12.546202 | orchestrator |
2025-09-08 00:58:12.546214 | orchestrator |
2025-09-08 00:58:12.546227 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:58:12.546239 | orchestrator | Monday 08 September 2025 00:57:57 +0000 (0:00:06.783) 0:00:25.400 ******
2025-09-08 00:58:12.546251 | orchestrator | ===============================================================================
2025-09-08 00:58:12.546264 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.55s
2025-09-08 00:58:12.546277 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.78s
2025-09-08 00:58:12.546289 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 3.91s
2025-09-08 00:58:12.546302 | orchestrator | Create share directory -------------------------------------------------- 0.99s
2025-09-08 00:58:12.546314 | orchestrator |
2025-09-08 00:58:12.546326 | orchestrator |
2025-09-08 00:58:12.546339 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 00:58:12.546351 | orchestrator |
2025-09-08 00:58:12.546377 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 00:58:12.546389 | orchestrator | Monday 08 September 2025 00:56:21 +0000 (0:00:00.268) 0:00:00.268 ******
2025-09-08 00:58:12.546402 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:58:12.546435 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:58:12.546448 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:58:12.546461 | orchestrator |
2025-09-08 00:58:12.546473 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 00:58:12.546486 | orchestrator | Monday 08 September 2025 00:56:22 +0000 (0:00:00.303) 0:00:00.572 ******
2025-09-08 00:58:12.546498 |
orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-09-08 00:58:12.546511 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-09-08 00:58:12.546523 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-09-08 00:58:12.546534 | orchestrator | 2025-09-08 00:58:12.546545 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-09-08 00:58:12.546555 | orchestrator | 2025-09-08 00:58:12.546566 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-08 00:58:12.546577 | orchestrator | Monday 08 September 2025 00:56:22 +0000 (0:00:00.443) 0:00:01.015 ****** 2025-09-08 00:58:12.546588 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:58:12.546598 | orchestrator | 2025-09-08 00:58:12.546609 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-08 00:58:12.546620 | orchestrator | Monday 08 September 2025 00:56:23 +0000 (0:00:00.542) 0:00:01.558 ****** 2025-09-08 00:58:12.546636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-08 00:58:12.546682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-08 00:58:12.546703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-08 00:58:12.546722 | orchestrator | 2025-09-08 00:58:12.546733 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-09-08 00:58:12.546744 | orchestrator | Monday 08 September 2025 00:56:24 +0000 (0:00:01.087) 
0:00:02.645 ****** 2025-09-08 00:58:12.546755 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:12.546766 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:12.546777 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:12.546788 | orchestrator | 2025-09-08 00:58:12.546799 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-08 00:58:12.546810 | orchestrator | Monday 08 September 2025 00:56:24 +0000 (0:00:00.451) 0:00:03.097 ****** 2025-09-08 00:58:12.546821 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-08 00:58:12.546845 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-08 00:58:12.546861 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-09-08 00:58:12.546872 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-08 00:58:12.546883 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-08 00:58:12.546894 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-08 00:58:12.546905 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-08 00:58:12.546915 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-08 00:58:12.546926 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-08 00:58:12.546937 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-08 00:58:12.546948 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-08 00:58:12.546958 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-09-08 00:58:12.546969 | orchestrator | skipping: 
[testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-08 00:58:12.546980 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-08 00:58:12.546990 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-09-08 00:58:12.547001 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-08 00:58:12.547012 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-08 00:58:12.547023 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-08 00:58:12.547041 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-08 00:58:12.547051 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-08 00:58:12.547062 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-08 00:58:12.547073 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-09-08 00:58:12.547083 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-08 00:58:12.547094 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-08 00:58:12.547106 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-08 00:58:12.547119 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-09-08 00:58:12.547130 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-08 
00:58:12.547141 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-08 00:58:12.547152 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-09-08 00:58:12.547162 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-08 00:58:12.547173 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-08 00:58:12.547184 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-09-08 00:58:12.547194 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-09-08 00:58:12.547210 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-08 00:58:12.547222 | orchestrator | 2025-09-08 00:58:12.547233 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-08 00:58:12.547243 | orchestrator | Monday 08 September 2025 00:56:25 +0000 (0:00:00.795) 0:00:03.892 ****** 2025-09-08 00:58:12.547254 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:12.547265 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:12.547276 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:12.547287 | orchestrator | 2025-09-08 00:58:12.547298 | orchestrator | TASK [horizon : Check if policies shall be overwritten] 
************************ 2025-09-08 00:58:12.547308 | orchestrator | Monday 08 September 2025 00:56:25 +0000 (0:00:00.294) 0:00:04.186 ****** 2025-09-08 00:58:12.547319 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.547330 | orchestrator | 2025-09-08 00:58:12.547341 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-08 00:58:12.547356 | orchestrator | Monday 08 September 2025 00:56:25 +0000 (0:00:00.144) 0:00:04.331 ****** 2025-09-08 00:58:12.547367 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.547378 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:12.547389 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:12.547399 | orchestrator | 2025-09-08 00:58:12.547430 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-08 00:58:12.547441 | orchestrator | Monday 08 September 2025 00:56:26 +0000 (0:00:00.491) 0:00:04.822 ****** 2025-09-08 00:58:12.547459 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:12.547470 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:12.547481 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:12.547492 | orchestrator | 2025-09-08 00:58:12.547503 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-08 00:58:12.547513 | orchestrator | Monday 08 September 2025 00:56:26 +0000 (0:00:00.318) 0:00:05.141 ****** 2025-09-08 00:58:12.547524 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.547535 | orchestrator | 2025-09-08 00:58:12.547545 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-08 00:58:12.547556 | orchestrator | Monday 08 September 2025 00:56:26 +0000 (0:00:00.143) 0:00:05.284 ****** 2025-09-08 00:58:12.547567 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.547578 | orchestrator | skipping: [testbed-node-1] 2025-09-08 
00:58:12.547588 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:12.547599 | orchestrator | 2025-09-08 00:58:12.547610 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-08 00:58:12.547621 | orchestrator | Monday 08 September 2025 00:56:27 +0000 (0:00:00.290) 0:00:05.574 ****** 2025-09-08 00:58:12.547631 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:12.547642 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:12.547653 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:12.547663 | orchestrator | 2025-09-08 00:58:12.547674 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-08 00:58:12.547685 | orchestrator | Monday 08 September 2025 00:56:27 +0000 (0:00:00.308) 0:00:05.883 ****** 2025-09-08 00:58:12.547695 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.547706 | orchestrator | 2025-09-08 00:58:12.547717 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-08 00:58:12.547727 | orchestrator | Monday 08 September 2025 00:56:27 +0000 (0:00:00.138) 0:00:06.022 ****** 2025-09-08 00:58:12.547738 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.547749 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:12.547759 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:12.547770 | orchestrator | 2025-09-08 00:58:12.547781 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-08 00:58:12.547792 | orchestrator | Monday 08 September 2025 00:56:27 +0000 (0:00:00.507) 0:00:06.530 ****** 2025-09-08 00:58:12.547803 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:12.547813 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:12.547824 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:12.547835 | orchestrator | 2025-09-08 00:58:12.547846 | orchestrator | TASK [horizon : Check if policies 
shall be overwritten] ************************ 2025-09-08 00:58:12.547857 | orchestrator | Monday 08 September 2025 00:56:28 +0000 (0:00:00.328) 0:00:06.859 ****** 2025-09-08 00:58:12.547867 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.547878 | orchestrator | 2025-09-08 00:58:12.547889 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-08 00:58:12.547899 | orchestrator | Monday 08 September 2025 00:56:28 +0000 (0:00:00.119) 0:00:06.978 ****** 2025-09-08 00:58:12.547910 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.547921 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:12.547931 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:12.547942 | orchestrator | 2025-09-08 00:58:12.547953 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-08 00:58:12.547964 | orchestrator | Monday 08 September 2025 00:56:28 +0000 (0:00:00.286) 0:00:07.264 ****** 2025-09-08 00:58:12.547974 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:12.547985 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:12.547996 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:12.548006 | orchestrator | 2025-09-08 00:58:12.548017 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-08 00:58:12.548028 | orchestrator | Monday 08 September 2025 00:56:29 +0000 (0:00:00.524) 0:00:07.789 ****** 2025-09-08 00:58:12.548045 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.548056 | orchestrator | 2025-09-08 00:58:12.548066 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-08 00:58:12.548077 | orchestrator | Monday 08 September 2025 00:56:29 +0000 (0:00:00.121) 0:00:07.911 ****** 2025-09-08 00:58:12.548088 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.548098 | orchestrator | skipping: [testbed-node-1] 
2025-09-08 00:58:12.548109 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:12.548119 | orchestrator | 2025-09-08 00:58:12.548130 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-08 00:58:12.548141 | orchestrator | Monday 08 September 2025 00:56:29 +0000 (0:00:00.326) 0:00:08.238 ****** 2025-09-08 00:58:12.548151 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:12.548162 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:12.548173 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:12.548184 | orchestrator | 2025-09-08 00:58:12.548199 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-08 00:58:12.548210 | orchestrator | Monday 08 September 2025 00:56:30 +0000 (0:00:00.306) 0:00:08.545 ****** 2025-09-08 00:58:12.548221 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.548231 | orchestrator | 2025-09-08 00:58:12.548242 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-08 00:58:12.548253 | orchestrator | Monday 08 September 2025 00:56:30 +0000 (0:00:00.125) 0:00:08.670 ****** 2025-09-08 00:58:12.548263 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.548274 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:12.548285 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:12.548296 | orchestrator | 2025-09-08 00:58:12.548306 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-08 00:58:12.548317 | orchestrator | Monday 08 September 2025 00:56:30 +0000 (0:00:00.345) 0:00:09.015 ****** 2025-09-08 00:58:12.548328 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:12.548339 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:12.548349 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:12.548360 | orchestrator | 2025-09-08 00:58:12.548377 | orchestrator | TASK [horizon : Check 
if policies shall be overwritten] ************************ 2025-09-08 00:58:12.548388 | orchestrator | Monday 08 September 2025 00:56:31 +0000 (0:00:00.627) 0:00:09.643 ****** 2025-09-08 00:58:12.548399 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.548454 | orchestrator | 2025-09-08 00:58:12.548467 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-08 00:58:12.548478 | orchestrator | Monday 08 September 2025 00:56:31 +0000 (0:00:00.137) 0:00:09.781 ****** 2025-09-08 00:58:12.548488 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.548499 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:12.548510 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:12.548520 | orchestrator | 2025-09-08 00:58:12.548531 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-08 00:58:12.548542 | orchestrator | Monday 08 September 2025 00:56:31 +0000 (0:00:00.282) 0:00:10.063 ****** 2025-09-08 00:58:12.548553 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:12.548564 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:12.548574 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:12.548585 | orchestrator | 2025-09-08 00:58:12.548596 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-08 00:58:12.548607 | orchestrator | Monday 08 September 2025 00:56:31 +0000 (0:00:00.330) 0:00:10.393 ****** 2025-09-08 00:58:12.548617 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.548628 | orchestrator | 2025-09-08 00:58:12.548639 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-08 00:58:12.548650 | orchestrator | Monday 08 September 2025 00:56:31 +0000 (0:00:00.116) 0:00:10.510 ****** 2025-09-08 00:58:12.548661 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.548671 | orchestrator | skipping: 
[testbed-node-1] 2025-09-08 00:58:12.548682 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:12.548700 | orchestrator | 2025-09-08 00:58:12.548711 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-08 00:58:12.548722 | orchestrator | Monday 08 September 2025 00:56:32 +0000 (0:00:00.310) 0:00:10.820 ****** 2025-09-08 00:58:12.548733 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:12.548744 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:12.548755 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:12.548765 | orchestrator | 2025-09-08 00:58:12.548776 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-08 00:58:12.548787 | orchestrator | Monday 08 September 2025 00:56:33 +0000 (0:00:00.726) 0:00:11.547 ****** 2025-09-08 00:58:12.548798 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.548808 | orchestrator | 2025-09-08 00:58:12.548819 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-08 00:58:12.548830 | orchestrator | Monday 08 September 2025 00:56:33 +0000 (0:00:00.159) 0:00:11.706 ****** 2025-09-08 00:58:12.548841 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.548852 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:12.548862 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:12.548873 | orchestrator | 2025-09-08 00:58:12.548884 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-08 00:58:12.548895 | orchestrator | Monday 08 September 2025 00:56:33 +0000 (0:00:00.365) 0:00:12.071 ****** 2025-09-08 00:58:12.548905 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:12.548916 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:12.548927 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:12.548938 | orchestrator | 2025-09-08 00:58:12.548948 | orchestrator | TASK 
[horizon : Check if policies shall be overwritten] ************************ 2025-09-08 00:58:12.548959 | orchestrator | Monday 08 September 2025 00:56:33 +0000 (0:00:00.315) 0:00:12.387 ****** 2025-09-08 00:58:12.548970 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.548981 | orchestrator | 2025-09-08 00:58:12.548992 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-08 00:58:12.549002 | orchestrator | Monday 08 September 2025 00:56:33 +0000 (0:00:00.146) 0:00:12.534 ****** 2025-09-08 00:58:12.549013 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.549024 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:12.549035 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:12.549045 | orchestrator | 2025-09-08 00:58:12.549056 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-09-08 00:58:12.549067 | orchestrator | Monday 08 September 2025 00:56:34 +0000 (0:00:00.462) 0:00:12.996 ****** 2025-09-08 00:58:12.549078 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:58:12.549088 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:58:12.549099 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:58:12.549110 | orchestrator | 2025-09-08 00:58:12.549121 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-09-08 00:58:12.549131 | orchestrator | Monday 08 September 2025 00:56:36 +0000 (0:00:01.661) 0:00:14.658 ****** 2025-09-08 00:58:12.549142 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-08 00:58:12.549153 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-08 00:58:12.549175 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-08 00:58:12.549186 | orchestrator | 2025-09-08 
00:58:12.549197 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-09-08 00:58:12.549207 | orchestrator | Monday 08 September 2025 00:56:38 +0000 (0:00:01.932) 0:00:16.590 ****** 2025-09-08 00:58:12.549218 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-08 00:58:12.549229 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-08 00:58:12.549240 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-08 00:58:12.549257 | orchestrator | 2025-09-08 00:58:12.549268 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-09-08 00:58:12.549278 | orchestrator | Monday 08 September 2025 00:56:40 +0000 (0:00:02.161) 0:00:18.752 ****** 2025-09-08 00:58:12.549295 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-08 00:58:12.549306 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-08 00:58:12.549317 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-08 00:58:12.549328 | orchestrator | 2025-09-08 00:58:12.549339 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-09-08 00:58:12.549350 | orchestrator | Monday 08 September 2025 00:56:42 +0000 (0:00:02.043) 0:00:20.795 ****** 2025-09-08 00:58:12.549360 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.549371 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:12.549382 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:12.549392 | orchestrator | 2025-09-08 00:58:12.549403 | orchestrator | TASK [horizon : Copying over custom themes] 
************************************ 2025-09-08 00:58:12.549463 | orchestrator | Monday 08 September 2025 00:56:42 +0000 (0:00:00.293) 0:00:21.089 ****** 2025-09-08 00:58:12.549474 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.549485 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:12.549496 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:12.549507 | orchestrator | 2025-09-08 00:58:12.549516 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-08 00:58:12.549526 | orchestrator | Monday 08 September 2025 00:56:42 +0000 (0:00:00.311) 0:00:21.401 ****** 2025-09-08 00:58:12.549536 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:58:12.549545 | orchestrator | 2025-09-08 00:58:12.549555 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-09-08 00:58:12.549564 | orchestrator | Monday 08 September 2025 00:56:43 +0000 (0:00:00.578) 0:00:21.979 ****** 2025-09-08 00:58:12.549581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-08 00:58:12.549607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', 
'', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-08 00:58:12.549625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-08 00:58:12.549641 | orchestrator | 2025-09-08 00:58:12.549651 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-08 00:58:12.549661 | orchestrator | Monday 08 September 2025 00:56:45 +0000 (0:00:01.948) 0:00:23.928 ****** 2025-09-08 00:58:12.549680 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-08 00:58:12.549691 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.549707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-08 00:58:12.549728 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:12.549738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-08 00:58:12.549749 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:12.549759 | orchestrator | 2025-09-08 00:58:12.549769 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-08 00:58:12.549778 | orchestrator | Monday 08 September 2025 00:56:46 +0000 (0:00:00.633) 0:00:24.561 ****** 2025-09-08 00:58:12.549801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-08 00:58:12.549818 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:12.549828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-08 00:58:12.549844 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:12.549866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-08 00:58:12.549877 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:12.549887 | orchestrator | 2025-09-08 00:58:12.549896 | orchestrator | TASK [horizon : Deploy horizon container] 
************************************** 2025-09-08 00:58:12.549906 | orchestrator | Monday 08 September 2025 00:56:46 +0000 (0:00:00.892) 0:00:25.454 ****** 2025-09-08 00:58:12.549916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-08 00:58:12.549946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-08 00:58:12.549958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-08 00:58:12.549974 | orchestrator |
2025-09-08 00:58:12.549983 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-08 00:58:12.550000 | orchestrator | Monday 08 September 2025 00:56:48 +0000 (0:00:01.420) 0:00:26.874 ******
2025-09-08 00:58:12.550010 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:58:12.550049 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:58:12.550059 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:58:12.550069 | orchestrator |
2025-09-08 00:58:12.550078 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-08 00:58:12.550088 | orchestrator | Monday 08 September 2025 00:56:48 +0000 (0:00:00.299) 0:00:27.173 ******
2025-09-08 00:58:12.550098 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:58:12.550107 | orchestrator |
2025-09-08 00:58:12.550117 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-09-08 00:58:12.550127 | orchestrator | Monday 08 September 2025 00:56:49 +0000 (0:00:00.507) 0:00:27.681 ******
2025-09-08 00:58:12.550136 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:58:12.550146 | orchestrator |
2025-09-08 00:58:12.550161 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-09-08 00:58:12.550171 | orchestrator | Monday 08 September 2025 00:56:51 +0000 (0:00:02.459) 0:00:30.141 ******
2025-09-08 00:58:12.550180 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:58:12.550190 | orchestrator |
2025-09-08 00:58:12.550200 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-09-08 00:58:12.550209 | orchestrator | Monday 08 September 2025 00:56:54 +0000 (0:00:02.586) 0:00:32.728 ******
2025-09-08 00:58:12.550219 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:58:12.550228 | orchestrator |
2025-09-08 00:58:12.550238 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-08 00:58:12.550248 | orchestrator | Monday 08 September 2025 00:57:10 +0000 (0:00:16.435) 0:00:49.164 ******
2025-09-08 00:58:12.550257 | orchestrator |
2025-09-08 00:58:12.550267 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-08 00:58:12.550276 | orchestrator | Monday 08 September 2025 00:57:10 +0000 (0:00:00.066) 0:00:49.230 ******
2025-09-08 00:58:12.550286 | orchestrator |
2025-09-08 00:58:12.550295 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-08 00:58:12.550305 | orchestrator | Monday 08 September 2025 00:57:10 +0000 (0:00:00.065) 0:00:49.295 ******
2025-09-08 00:58:12.550315 | orchestrator |
2025-09-08 00:58:12.550324 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-09-08 00:58:12.550334 | orchestrator | Monday 08 September 2025 00:57:10 +0000 (0:00:00.066) 0:00:49.362 ******
2025-09-08 00:58:12.550343 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:58:12.550353 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:58:12.550362 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:58:12.550372 | orchestrator |
2025-09-08 00:58:12.550381 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:58:12.550398 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-09-08 00:58:12.550424 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-08 00:58:12.550434 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-08 00:58:12.550444 | orchestrator |
2025-09-08 00:58:12.550453 | orchestrator |
2025-09-08 00:58:12.550463 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:58:12.550473 | orchestrator | Monday 08 September 2025 00:58:09 +0000 (0:00:59.024) 0:01:48.387 ******
2025-09-08 00:58:12.550482 | orchestrator | ===============================================================================
2025-09-08 00:58:12.550492 | orchestrator | horizon : Restart horizon container ------------------------------------ 59.02s
2025-09-08 00:58:12.550501 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.44s
2025-09-08 00:58:12.550511 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.59s
2025-09-08 00:58:12.550521 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.46s
2025-09-08 00:58:12.550530 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.16s
2025-09-08 00:58:12.550540 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.04s
2025-09-08 00:58:12.550549 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.95s
2025-09-08 00:58:12.550559 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.93s
2025-09-08 00:58:12.550569 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.66s
2025-09-08 00:58:12.550578 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.42s
2025-09-08 00:58:12.550588 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.09s
2025-09-08 00:58:12.550597 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.89s
2025-09-08 00:58:12.550607 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.80s
2025-09-08 00:58:12.550616 | orchestrator | horizon : Update policy file name --------------------------------------- 0.73s
2025-09-08 00:58:12.550626 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.63s
2025-09-08 00:58:12.550636 | orchestrator | horizon : Update policy file name --------------------------------------- 0.63s
2025-09-08 00:58:12.550645 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.58s
2025-09-08 00:58:12.550655 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s
2025-09-08 00:58:12.550669 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s
2025-09-08 00:58:12.550679 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s
2025-09-08 00:58:12.550689 | orchestrator | 2025-09-08 00:58:12 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:58:15.590540 | orchestrator | 2025-09-08 00:58:15 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED
2025-09-08 00:58:15.592896 | orchestrator | 2025-09-08 00:58:15 | INFO  | Task f505dbee-f66d-42fb-a190-4c0e16512259 is in state
STARTED
2025-09-08 00:58:15.593326 | orchestrator | 2025-09-08 00:58:15 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:58:18.636724 | orchestrator | 2025-09-08 00:58:18 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED
2025-09-08 00:58:18.638865 | orchestrator | 2025-09-08 00:58:18 | INFO  | Task f505dbee-f66d-42fb-a190-4c0e16512259 is in state STARTED
2025-09-08 00:58:18.638895 | orchestrator | 2025-09-08 00:58:18 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:58:21.691819 | orchestrator | 2025-09-08 00:58:21 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED
2025-09-08 00:58:21.692113 | orchestrator | 2025-09-08 00:58:21 | INFO  | Task f505dbee-f66d-42fb-a190-4c0e16512259 is in state STARTED
2025-09-08 00:58:21.692137 | orchestrator | 2025-09-08 00:58:21 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:58:24.736540 | orchestrator | 2025-09-08 00:58:24 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED
2025-09-08 00:58:24.738139 | orchestrator | 2025-09-08 00:58:24 | INFO  | Task f505dbee-f66d-42fb-a190-4c0e16512259 is in state STARTED
2025-09-08 00:58:24.738173 | orchestrator | 2025-09-08 00:58:24 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:58:27.783154 | orchestrator | 2025-09-08 00:58:27 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED
2025-09-08 00:58:27.784135 | orchestrator | 2025-09-08 00:58:27 | INFO  | Task f505dbee-f66d-42fb-a190-4c0e16512259 is in state STARTED
2025-09-08 00:58:27.784166 | orchestrator | 2025-09-08 00:58:27 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:58:30.827670 | orchestrator | 2025-09-08 00:58:30 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED
2025-09-08 00:58:30.829524 | orchestrator | 2025-09-08 00:58:30 | INFO  | Task f505dbee-f66d-42fb-a190-4c0e16512259 is in state STARTED
2025-09-08 00:58:30.829554 | orchestrator | 2025-09-08 00:58:30 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:58:33.871808 | orchestrator | 2025-09-08 00:58:33 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED
2025-09-08 00:58:33.874480 | orchestrator | 2025-09-08 00:58:33 | INFO  | Task f505dbee-f66d-42fb-a190-4c0e16512259 is in state STARTED
2025-09-08 00:58:33.874659 | orchestrator | 2025-09-08 00:58:33 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:58:36.921291 | orchestrator | 2025-09-08 00:58:36 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED
2025-09-08 00:58:36.923507 | orchestrator | 2025-09-08 00:58:36 | INFO  | Task f505dbee-f66d-42fb-a190-4c0e16512259 is in state STARTED
2025-09-08 00:58:36.924060 | orchestrator | 2025-09-08 00:58:36 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:58:39.966145 | orchestrator | 2025-09-08 00:58:39 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED
2025-09-08 00:58:39.967656 | orchestrator | 2025-09-08 00:58:39 | INFO  | Task f505dbee-f66d-42fb-a190-4c0e16512259 is in state STARTED
2025-09-08 00:58:39.967685 | orchestrator | 2025-09-08 00:58:39 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:58:43.012546 | orchestrator | 2025-09-08 00:58:43 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED
2025-09-08 00:58:43.015066 | orchestrator | 2025-09-08 00:58:43 | INFO  | Task f505dbee-f66d-42fb-a190-4c0e16512259 is in state STARTED
2025-09-08 00:58:43.015103 | orchestrator | 2025-09-08 00:58:43 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:58:46.051269 | orchestrator | 2025-09-08 00:58:46 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED
2025-09-08 00:58:46.051392 | orchestrator | 2025-09-08 00:58:46 | INFO  | Task f505dbee-f66d-42fb-a190-4c0e16512259 is in state STARTED
2025-09-08 00:58:46.051460 | orchestrator | 2025-09-08 00:58:46 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:58:49.103794 | orchestrator | 2025-09-08 00:58:49 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED
2025-09-08 00:58:49.104233 | orchestrator | 2025-09-08 00:58:49 | INFO  | Task f505dbee-f66d-42fb-a190-4c0e16512259 is in state STARTED
2025-09-08 00:58:49.104300 | orchestrator | 2025-09-08 00:58:49 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:58:52.148306 | orchestrator | 2025-09-08 00:58:52 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED
2025-09-08 00:58:52.149791 | orchestrator | 2025-09-08 00:58:52 | INFO  | Task f505dbee-f66d-42fb-a190-4c0e16512259 is in state STARTED
2025-09-08 00:58:52.149822 | orchestrator | 2025-09-08 00:58:52 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:58:55.204147 | orchestrator | 2025-09-08 00:58:55 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED
2025-09-08 00:58:55.207789 | orchestrator | 2025-09-08 00:58:55 | INFO  | Task f505dbee-f66d-42fb-a190-4c0e16512259 is in state SUCCESS
2025-09-08 00:58:55.208459 | orchestrator | 2025-09-08 00:58:55 | INFO  | Task b90ea05b-a12e-4f6b-a919-6688bba8fc6f is in state STARTED
2025-09-08 00:58:55.210922 | orchestrator | 2025-09-08 00:58:55 | INFO  | Task 3d701c7e-fa74-45d5-b209-5f56d0b36bcc is in state STARTED
2025-09-08 00:58:55.213551 | orchestrator | 2025-09-08 00:58:55 | INFO  | Task 0c80692f-57b6-43b7-a90f-6bbae938e666 is in state STARTED
2025-09-08 00:58:55.213652 | orchestrator | 2025-09-08 00:58:55 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:58:58.259211 | orchestrator | 2025-09-08 00:58:58 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED
2025-09-08 00:58:58.259746 | orchestrator | 2025-09-08 00:58:58 | INFO  | Task b90ea05b-a12e-4f6b-a919-6688bba8fc6f is in state STARTED
2025-09-08 00:58:58.261939 | orchestrator | 2025-09-08 00:58:58 | INFO  | Task 3d701c7e-fa74-45d5-b209-5f56d0b36bcc is in state STARTED
2025-09-08 00:58:58.263659 | orchestrator | 2025-09-08 00:58:58 | INFO  | Task 0c80692f-57b6-43b7-a90f-6bbae938e666 is in state STARTED
2025-09-08 00:58:58.263685 | orchestrator | 2025-09-08 00:58:58 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:01.316381 | orchestrator | 2025-09-08 00:59:01 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED
2025-09-08 00:59:01.316556 | orchestrator | 2025-09-08 00:59:01 | INFO  | Task eaeb87a6-39ff-4648-ad1a-08badec2e9c1 is in state STARTED
2025-09-08 00:59:01.316570 | orchestrator | 2025-09-08 00:59:01 | INFO  | Task b90ea05b-a12e-4f6b-a919-6688bba8fc6f is in state STARTED
2025-09-08 00:59:01.316582 | orchestrator | 2025-09-08 00:59:01 | INFO  | Task 52f9a046-42d0-4a4f-b982-4eaee421fe63 is in state STARTED
2025-09-08 00:59:01.317035 | orchestrator | 2025-09-08 00:59:01 | INFO  | Task 3d701c7e-fa74-45d5-b209-5f56d0b36bcc is in state STARTED
2025-09-08 00:59:01.317888 | orchestrator | 2025-09-08 00:59:01 | INFO  | Task 0c80692f-57b6-43b7-a90f-6bbae938e666 is in state SUCCESS
2025-09-08 00:59:01.317915 | orchestrator | 2025-09-08 00:59:01 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:04.620155 | orchestrator | 2025-09-08 00:59:04 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state STARTED
2025-09-08 00:59:04.620267 | orchestrator | 2025-09-08 00:59:04 | INFO  | Task eaeb87a6-39ff-4648-ad1a-08badec2e9c1 is in state STARTED
2025-09-08 00:59:04.620280 | orchestrator | 2025-09-08 00:59:04 | INFO  | Task b90ea05b-a12e-4f6b-a919-6688bba8fc6f is in state STARTED
2025-09-08 00:59:04.620291 | orchestrator | 2025-09-08 00:59:04 | INFO  | Task 52f9a046-42d0-4a4f-b982-4eaee421fe63 is in state STARTED
2025-09-08 00:59:04.620301 | orchestrator | 2025-09-08 00:59:04 | INFO  | Task 3d701c7e-fa74-45d5-b209-5f56d0b36bcc is in state STARTED
2025-09-08 00:59:04.620311 | orchestrator | 2025-09-08 00:59:04 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:07.404924 | orchestrator |
2025-09-08 00:59:07.405071 | orchestrator
| 2025-09-08 00:59:07.405086 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-09-08 00:59:07.405099 | orchestrator | 2025-09-08 00:59:07.405111 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-09-08 00:59:07.405122 | orchestrator | Monday 08 September 2025 00:58:01 +0000 (0:00:00.256) 0:00:00.256 ****** 2025-09-08 00:59:07.405133 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-09-08 00:59:07.405147 | orchestrator | 2025-09-08 00:59:07.405158 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-09-08 00:59:07.405169 | orchestrator | Monday 08 September 2025 00:58:02 +0000 (0:00:00.225) 0:00:00.482 ****** 2025-09-08 00:59:07.405180 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-09-08 00:59:07.405192 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-09-08 00:59:07.405203 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-09-08 00:59:07.405215 | orchestrator | 2025-09-08 00:59:07.405243 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-09-08 00:59:07.405254 | orchestrator | Monday 08 September 2025 00:58:03 +0000 (0:00:01.219) 0:00:01.701 ****** 2025-09-08 00:59:07.405266 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-09-08 00:59:07.405277 | orchestrator | 2025-09-08 00:59:07.405288 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-09-08 00:59:07.405928 | orchestrator | Monday 08 September 2025 00:58:04 +0000 (0:00:01.226) 0:00:02.928 ****** 2025-09-08 00:59:07.405951 | orchestrator | changed: [testbed-manager] 2025-09-08 
00:59:07.405963 | orchestrator | 2025-09-08 00:59:07.405974 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-09-08 00:59:07.405986 | orchestrator | Monday 08 September 2025 00:58:05 +0000 (0:00:01.032) 0:00:03.961 ****** 2025-09-08 00:59:07.405997 | orchestrator | changed: [testbed-manager] 2025-09-08 00:59:07.406008 | orchestrator | 2025-09-08 00:59:07.406074 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-09-08 00:59:07.406086 | orchestrator | Monday 08 September 2025 00:58:06 +0000 (0:00:00.908) 0:00:04.869 ****** 2025-09-08 00:59:07.406097 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-09-08 00:59:07.406108 | orchestrator | ok: [testbed-manager] 2025-09-08 00:59:07.406119 | orchestrator | 2025-09-08 00:59:07.406130 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-09-08 00:59:07.406141 | orchestrator | Monday 08 September 2025 00:58:43 +0000 (0:00:37.059) 0:00:41.928 ****** 2025-09-08 00:59:07.406152 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-09-08 00:59:07.406164 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-09-08 00:59:07.406175 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-09-08 00:59:07.406186 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-09-08 00:59:07.406197 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-09-08 00:59:07.406207 | orchestrator | 2025-09-08 00:59:07.406218 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-09-08 00:59:07.406229 | orchestrator | Monday 08 September 2025 00:58:47 +0000 (0:00:04.080) 0:00:46.009 ****** 2025-09-08 00:59:07.406240 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-09-08 00:59:07.406251 | orchestrator | 2025-09-08 
00:59:07.406262 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-09-08 00:59:07.406273 | orchestrator | Monday 08 September 2025 00:58:48 +0000 (0:00:00.448) 0:00:46.457 ****** 2025-09-08 00:59:07.406284 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:59:07.406295 | orchestrator | 2025-09-08 00:59:07.406306 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-09-08 00:59:07.406316 | orchestrator | Monday 08 September 2025 00:58:48 +0000 (0:00:00.131) 0:00:46.589 ****** 2025-09-08 00:59:07.406340 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:59:07.406351 | orchestrator | 2025-09-08 00:59:07.406362 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-09-08 00:59:07.406373 | orchestrator | Monday 08 September 2025 00:58:48 +0000 (0:00:00.292) 0:00:46.881 ****** 2025-09-08 00:59:07.406384 | orchestrator | changed: [testbed-manager] 2025-09-08 00:59:07.406413 | orchestrator | 2025-09-08 00:59:07.406425 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-09-08 00:59:07.406436 | orchestrator | Monday 08 September 2025 00:58:50 +0000 (0:00:02.026) 0:00:48.907 ****** 2025-09-08 00:59:07.406447 | orchestrator | changed: [testbed-manager] 2025-09-08 00:59:07.406458 | orchestrator | 2025-09-08 00:59:07.406469 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-09-08 00:59:07.406479 | orchestrator | Monday 08 September 2025 00:58:51 +0000 (0:00:00.756) 0:00:49.664 ****** 2025-09-08 00:59:07.406490 | orchestrator | changed: [testbed-manager] 2025-09-08 00:59:07.406501 | orchestrator | 2025-09-08 00:59:07.406512 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-09-08 00:59:07.406523 | orchestrator | Monday 08 September 2025 00:58:51 +0000 
(0:00:00.642) 0:00:50.306 ****** 2025-09-08 00:59:07.406535 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-09-08 00:59:07.406545 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-09-08 00:59:07.406556 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-09-08 00:59:07.406567 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-09-08 00:59:07.406578 | orchestrator | 2025-09-08 00:59:07.406589 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:59:07.406601 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:59:07.406613 | orchestrator | 2025-09-08 00:59:07.406624 | orchestrator | 2025-09-08 00:59:07.406687 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:59:07.406700 | orchestrator | Monday 08 September 2025 00:58:53 +0000 (0:00:01.500) 0:00:51.807 ****** 2025-09-08 00:59:07.406711 | orchestrator | =============================================================================== 2025-09-08 00:59:07.406722 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 37.06s 2025-09-08 00:59:07.406733 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.08s 2025-09-08 00:59:07.406743 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 2.03s 2025-09-08 00:59:07.406754 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.50s 2025-09-08 00:59:07.406765 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.23s 2025-09-08 00:59:07.406776 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.22s 2025-09-08 00:59:07.406786 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.03s 
2025-09-08 00:59:07.406797 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.91s
2025-09-08 00:59:07.406816 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.76s
2025-09-08 00:59:07.406827 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.64s
2025-09-08 00:59:07.406838 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.45s
2025-09-08 00:59:07.406848 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s
2025-09-08 00:59:07.406859 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s
2025-09-08 00:59:07.406869 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s
2025-09-08 00:59:07.406880 | orchestrator |
2025-09-08 00:59:07.406891 | orchestrator |
2025-09-08 00:59:07.406901 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 00:59:07.406912 | orchestrator |
2025-09-08 00:59:07.406936 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 00:59:07.406947 | orchestrator | Monday 08 September 2025 00:58:57 +0000 (0:00:00.177) 0:00:00.177 ******
2025-09-08 00:59:07.406958 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:59:07.406969 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:59:07.406979 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:59:07.406990 | orchestrator |
2025-09-08 00:59:07.407001 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 00:59:07.407011 | orchestrator | Monday 08 September 2025 00:58:58 +0000 (0:00:00.314) 0:00:00.491 ******
2025-09-08 00:59:07.407022 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-09-08 00:59:07.407033 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-09-08 00:59:07.407044 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-09-08 00:59:07.407054 | orchestrator |
2025-09-08 00:59:07.407065 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-09-08 00:59:07.407076 | orchestrator |
2025-09-08 00:59:07.407087 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-09-08 00:59:07.407097 | orchestrator | Monday 08 September 2025 00:58:58 +0000 (0:00:00.701) 0:00:01.193 ******
2025-09-08 00:59:07.407108 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:59:07.407119 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:59:07.407130 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:59:07.407140 | orchestrator |
2025-09-08 00:59:07.407151 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:59:07.407163 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:59:07.407174 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:59:07.407185 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:59:07.407196 | orchestrator |
2025-09-08 00:59:07.407206 | orchestrator |
2025-09-08 00:59:07.407217 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:59:07.407228 | orchestrator | Monday 08 September 2025 00:58:59 +0000 (0:00:00.688) 0:00:01.882 ******
2025-09-08 00:59:07.407239 | orchestrator | ===============================================================================
2025-09-08 00:59:07.407249 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s
2025-09-08 00:59:07.407260 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.69s
2025-09-08 00:59:07.407271 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2025-09-08 00:59:07.407281 | orchestrator |
2025-09-08 00:59:07.407292 | orchestrator |
2025-09-08 00:59:07.407302 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 00:59:07.407313 | orchestrator |
2025-09-08 00:59:07.407324 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 00:59:07.407334 | orchestrator | Monday 08 September 2025 00:56:21 +0000 (0:00:00.268) 0:00:00.268 ******
2025-09-08 00:59:07.407345 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:59:07.407356 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:59:07.407366 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:59:07.407377 | orchestrator |
2025-09-08 00:59:07.407388 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 00:59:07.407429 | orchestrator | Monday 08 September 2025 00:56:22 +0000 (0:00:00.295) 0:00:00.564 ******
2025-09-08 00:59:07.407440 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-09-08 00:59:07.407451 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-09-08 00:59:07.407462 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-09-08 00:59:07.407473 | orchestrator |
2025-09-08 00:59:07.407484 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-09-08 00:59:07.407501 | orchestrator |
2025-09-08 00:59:07.407546 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-08 00:59:07.407559 | orchestrator | Monday 08 September 2025 00:56:22 +0000 (0:00:00.418) 0:00:00.982 ******
2025-09-08 00:59:07.407569 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:59:07.407580 | orchestrator |
2025-09-08 00:59:07.407591 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-09-08 00:59:07.407602 | orchestrator | Monday 08 September 2025 00:56:23 +0000 (0:00:00.584) 0:00:01.566 ******
2025-09-08 00:59:07.407623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-08 00:59:07.407641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-08 00:59:07.407654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-08 00:59:07.407667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-08 00:59:07.407721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-08 00:59:07.407735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-08 00:59:07.407748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-08 00:59:07.407761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-08 00:59:07.407772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-08 00:59:07.407784 | orchestrator |
2025-09-08 00:59:07.407795 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-09-08 00:59:07.407846 | orchestrator | Monday 08 September 2025 00:56:24 +0000 (0:00:01.820) 0:00:03.387 ******
2025-09-08 00:59:07.407858 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-09-08 00:59:07.407870 | orchestrator |
2025-09-08 00:59:07.407880 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2025-09-08 00:59:07.407899 | orchestrator | Monday 08 September 2025 00:56:25 +0000 (0:00:00.830) 0:00:04.217 ******
2025-09-08 00:59:07.407910 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:59:07.407921 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:59:07.407932 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:59:07.407943 | orchestrator |
2025-09-08 00:59:07.407954 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-09-08 00:59:07.407964 | orchestrator | Monday 08 September 2025 00:56:26 +0000 (0:00:00.525) 0:00:04.743 ******
2025-09-08 00:59:07.407975 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-08 00:59:07.407986 | orchestrator |
2025-09-08 00:59:07.407997 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-08 00:59:07.408007 | orchestrator | Monday 08 September 2025 00:56:26 +0000 (0:00:00.703) 0:00:05.446 ******
2025-09-08 00:59:07.408018 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:59:07.408029 | orchestrator |
2025-09-08 00:59:07.408072 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2025-09-08 00:59:07.408085 | orchestrator | Monday 08 September 2025 00:56:27 +0000 (0:00:00.534) 0:00:05.981 ******
2025-09-08 00:59:07.408103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-08 00:59:07.408116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-08 00:59:07.408129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-08 00:59:07.408149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-08 00:59:07.408168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-08 00:59:07.408186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-08 00:59:07.408197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-08 00:59:07.408209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-08 00:59:07.408220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-08 00:59:07.408238 | orchestrator |
2025-09-08 00:59:07.408249 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2025-09-08 00:59:07.408260 | orchestrator | Monday 08 September 2025 00:56:30 +0000 (0:00:03.217) 0:00:09.198 ******
2025-09-08 00:59:07.408272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-08 00:59:07.408293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-08 00:59:07.408317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-08 00:59:07.408328 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:59:07.408340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-08 00:59:07.408352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-08 00:59:07.408372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-08 00:59:07.408383 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:59:07.408421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-08 00:59:07.408439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:59:07.408451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-08 00:59:07.408462 | 
orchestrator | skipping: [testbed-node-2] 2025-09-08 00:59:07.408473 | orchestrator | 2025-09-08 00:59:07.408484 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-09-08 00:59:07.408495 | orchestrator | Monday 08 September 2025 00:56:31 +0000 (0:00:00.784) 0:00:09.983 ****** 2025-09-08 00:59:07.408507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-08 00:59:07.408526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2025-09-08 00:59:07.408537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-08 00:59:07.408548 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:07 | INFO  | Task fb3fe975-8a9f-4781-85d1-709d19922556 is in state SUCCESS 2025-09-08 00:59:07.408566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-08 00:59:07.408599 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:59:07.408610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-08 00:59:07.408627 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:59:07.408639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 
'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-08 00:59:07.408651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:59:07.408668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-08 00:59:07.408679 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:59:07.408690 | orchestrator | 2025-09-08 00:59:07.408701 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-09-08 00:59:07.408712 | orchestrator | Monday 08 September 2025 00:56:32 +0000 (0:00:00.751) 0:00:10.734 ****** 
2025-09-08 00:59:07.408728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:07.408741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:07.408760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:07.408780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-08 00:59:07.408791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-08 00:59:07.408808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-08 00:59:07.408819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:07.408838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:07.408849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:07.408860 | orchestrator | 2025-09-08 00:59:07.408871 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-09-08 00:59:07.408882 | orchestrator | Monday 08 September 2025 00:56:35 +0000 (0:00:03.381) 0:00:14.115 ****** 2025-09-08 00:59:07.408901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': 
True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:07.408918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:59:07.408930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:07.408949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:59:07.408961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:07.408979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:59:07.408990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:07.409007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:07.409025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:07.409036 | orchestrator | 2025-09-08 00:59:07.409048 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-09-08 00:59:07.409058 | orchestrator | Monday 08 September 2025 00:56:40 +0000 (0:00:05.287) 0:00:19.403 ****** 2025-09-08 00:59:07.409069 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:59:07.409080 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:59:07.409091 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:59:07.409102 | orchestrator | 2025-09-08 00:59:07.409112 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-09-08 00:59:07.409123 | orchestrator | Monday 08 September 2025 00:56:42 +0000 (0:00:01.463) 0:00:20.867 ****** 2025-09-08 00:59:07.409134 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:07.409144 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:59:07.409155 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:59:07.409165 | orchestrator | 2025-09-08 00:59:07.409176 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-09-08 00:59:07.409186 | orchestrator | Monday 08 September 2025 00:56:42 +0000 (0:00:00.567) 0:00:21.434 ****** 2025-09-08 00:59:07.409197 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:07.409208 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:59:07.409218 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:59:07.409229 | orchestrator | 2025-09-08 00:59:07.409240 | orchestrator | TASK [keystone : Copying Keystone 
Domain specific settings] ******************** 2025-09-08 00:59:07.409250 | orchestrator | Monday 08 September 2025 00:56:43 +0000 (0:00:00.324) 0:00:21.758 ****** 2025-09-08 00:59:07.409261 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:07.409272 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:59:07.409282 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:59:07.409293 | orchestrator | 2025-09-08 00:59:07.409303 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-09-08 00:59:07.409314 | orchestrator | Monday 08 September 2025 00:56:43 +0000 (0:00:00.507) 0:00:22.266 ****** 2025-09-08 00:59:07.409332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:07.409344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:59:07.409371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:07.409384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:59:07.409412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:07.409424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:59:07.409443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:07.409467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:07.409478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:07.409489 | orchestrator | 2025-09-08 00:59:07.409500 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-08 00:59:07.409511 | orchestrator | Monday 08 September 2025 00:56:46 +0000 
(0:00:02.513) 0:00:24.779 ****** 2025-09-08 00:59:07.409522 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:07.409533 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:59:07.409543 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:59:07.409554 | orchestrator | 2025-09-08 00:59:07.409565 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-09-08 00:59:07.409575 | orchestrator | Monday 08 September 2025 00:56:46 +0000 (0:00:00.383) 0:00:25.163 ****** 2025-09-08 00:59:07.409586 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-08 00:59:07.409597 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-08 00:59:07.409608 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-08 00:59:07.409619 | orchestrator | 2025-09-08 00:59:07.409629 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-09-08 00:59:07.409640 | orchestrator | Monday 08 September 2025 00:56:48 +0000 (0:00:01.504) 0:00:26.667 ****** 2025-09-08 00:59:07.409650 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-08 00:59:07.409661 | orchestrator | 2025-09-08 00:59:07.409672 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-09-08 00:59:07.409682 | orchestrator | Monday 08 September 2025 00:56:48 +0000 (0:00:00.856) 0:00:27.524 ****** 2025-09-08 00:59:07.409693 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:07.409704 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:59:07.409714 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:59:07.409725 | orchestrator | 2025-09-08 00:59:07.409735 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-09-08 00:59:07.409746 
| orchestrator | Monday 08 September 2025 00:56:49 +0000 (0:00:00.746) 0:00:28.271 ****** 2025-09-08 00:59:07.409757 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-08 00:59:07.409768 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-08 00:59:07.409778 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-08 00:59:07.409789 | orchestrator | 2025-09-08 00:59:07.409800 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-09-08 00:59:07.409817 | orchestrator | Monday 08 September 2025 00:56:50 +0000 (0:00:01.056) 0:00:29.327 ****** 2025-09-08 00:59:07.409828 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:59:07.409839 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:59:07.409849 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:59:07.409860 | orchestrator | 2025-09-08 00:59:07.409871 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-09-08 00:59:07.409882 | orchestrator | Monday 08 September 2025 00:56:51 +0000 (0:00:00.307) 0:00:29.634 ****** 2025-09-08 00:59:07.409893 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-08 00:59:07.409903 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-08 00:59:07.409914 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-08 00:59:07.409924 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-08 00:59:07.409941 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-08 00:59:07.409952 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-08 00:59:07.409963 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 
'dest': 'fernet-node-sync.sh'}) 2025-09-08 00:59:07.409974 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-08 00:59:07.409984 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-08 00:59:07.409995 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-08 00:59:07.410006 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-08 00:59:07.410062 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-08 00:59:07.410081 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-08 00:59:07.410093 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-08 00:59:07.410104 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-08 00:59:07.410115 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-08 00:59:07.410126 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-08 00:59:07.410137 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-08 00:59:07.410147 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-08 00:59:07.410158 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-08 00:59:07.410169 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-08 00:59:07.410180 | orchestrator | 2025-09-08 00:59:07.410190 | orchestrator | TASK 
[keystone : Copying files for keystone-ssh] ******************************* 2025-09-08 00:59:07.410201 | orchestrator | Monday 08 September 2025 00:57:00 +0000 (0:00:08.981) 0:00:38.616 ****** 2025-09-08 00:59:07.410212 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-08 00:59:07.410223 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-08 00:59:07.410233 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-08 00:59:07.410244 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-08 00:59:07.410255 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-08 00:59:07.410273 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-08 00:59:07.410284 | orchestrator | 2025-09-08 00:59:07.410295 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-08 00:59:07.410305 | orchestrator | Monday 08 September 2025 00:57:03 +0000 (0:00:02.970) 0:00:41.586 ****** 2025-09-08 00:59:07.410317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:07.410339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:07.410357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:07.410369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-08 00:59:07.410388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-08 00:59:07.410417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-08 00:59:07.410428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:07.410446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:07.410462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:07.410473 | orchestrator | 2025-09-08 00:59:07.410485 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-08 00:59:07.410496 | orchestrator | Monday 08 September 2025 00:57:05 +0000 (0:00:02.464) 0:00:44.051 ****** 2025-09-08 00:59:07.410507 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:07.410518 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:59:07.410529 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:59:07.410539 | orchestrator | 2025-09-08 00:59:07.410550 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-08 00:59:07.410561 | orchestrator | Monday 08 September 2025 00:57:05 +0000 (0:00:00.296) 0:00:44.347 ****** 2025-09-08 00:59:07.410572 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:59:07.410583 | orchestrator | 2025-09-08 00:59:07.410594 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-08 00:59:07.410611 | orchestrator | Monday 08 September 2025 00:57:08 +0000 (0:00:02.290) 0:00:46.638 ****** 2025-09-08 00:59:07.410622 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:59:07.410633 | orchestrator | 2025-09-08 00:59:07.410644 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-08 00:59:07.410655 | orchestrator | Monday 08 September 2025 00:57:10 +0000 (0:00:02.295) 0:00:48.934 ****** 2025-09-08 00:59:07.410665 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:59:07.410676 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:59:07.410687 | orchestrator | ok: [testbed-node-2] 2025-09-08 
00:59:07.410698 | orchestrator | 2025-09-08 00:59:07.410709 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-08 00:59:07.410719 | orchestrator | Monday 08 September 2025 00:57:11 +0000 (0:00:00.945) 0:00:49.879 ****** 2025-09-08 00:59:07.410730 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:59:07.410748 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:59:07.410767 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:59:07.410785 | orchestrator | 2025-09-08 00:59:07.410806 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-08 00:59:07.410826 | orchestrator | Monday 08 September 2025 00:57:12 +0000 (0:00:00.721) 0:00:50.601 ****** 2025-09-08 00:59:07.410841 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:07.410852 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:59:07.410862 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:59:07.410873 | orchestrator | 2025-09-08 00:59:07.410884 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-08 00:59:07.410894 | orchestrator | Monday 08 September 2025 00:57:12 +0000 (0:00:00.387) 0:00:50.988 ****** 2025-09-08 00:59:07.410905 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:59:07.410916 | orchestrator | 2025-09-08 00:59:07.410926 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-08 00:59:07.410937 | orchestrator | Monday 08 September 2025 00:57:26 +0000 (0:00:14.373) 0:01:05.362 ****** 2025-09-08 00:59:07.410948 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:59:07.410958 | orchestrator | 2025-09-08 00:59:07.410969 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-08 00:59:07.410980 | orchestrator | Monday 08 September 2025 00:57:35 +0000 (0:00:08.841) 0:01:14.204 ****** 2025-09-08 00:59:07.410991 
| orchestrator | 2025-09-08 00:59:07.411001 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-08 00:59:07.411012 | orchestrator | Monday 08 September 2025 00:57:35 +0000 (0:00:00.066) 0:01:14.271 ****** 2025-09-08 00:59:07.411023 | orchestrator | 2025-09-08 00:59:07.411034 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-08 00:59:07.411044 | orchestrator | Monday 08 September 2025 00:57:35 +0000 (0:00:00.065) 0:01:14.336 ****** 2025-09-08 00:59:07.411055 | orchestrator | 2025-09-08 00:59:07.411066 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-09-08 00:59:07.411077 | orchestrator | Monday 08 September 2025 00:57:35 +0000 (0:00:00.076) 0:01:14.413 ****** 2025-09-08 00:59:07.411088 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:59:07.411098 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:59:07.411109 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:59:07.411120 | orchestrator | 2025-09-08 00:59:07.411131 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-08 00:59:07.411141 | orchestrator | Monday 08 September 2025 00:57:57 +0000 (0:00:22.080) 0:01:36.493 ****** 2025-09-08 00:59:07.411152 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:59:07.411163 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:59:07.411173 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:59:07.411184 | orchestrator | 2025-09-08 00:59:07.411195 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-08 00:59:07.411212 | orchestrator | Monday 08 September 2025 00:58:08 +0000 (0:00:10.308) 0:01:46.802 ****** 2025-09-08 00:59:07.411230 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:59:07.411241 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:59:07.411252 | 
orchestrator | changed: [testbed-node-1] 2025-09-08 00:59:07.411265 | orchestrator | 2025-09-08 00:59:07.411283 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-08 00:59:07.411301 | orchestrator | Monday 08 September 2025 00:58:20 +0000 (0:00:12.316) 0:01:59.118 ****** 2025-09-08 00:59:07.411320 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:59:07.411338 | orchestrator | 2025-09-08 00:59:07.411349 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-08 00:59:07.411360 | orchestrator | Monday 08 September 2025 00:58:21 +0000 (0:00:00.744) 0:01:59.863 ****** 2025-09-08 00:59:07.411371 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:59:07.411381 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:59:07.411392 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:59:07.411454 | orchestrator | 2025-09-08 00:59:07.411465 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-08 00:59:07.411483 | orchestrator | Monday 08 September 2025 00:58:22 +0000 (0:00:00.736) 0:02:00.599 ****** 2025-09-08 00:59:07.411494 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:59:07.411505 | orchestrator | 2025-09-08 00:59:07.411516 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-08 00:59:07.411527 | orchestrator | Monday 08 September 2025 00:58:23 +0000 (0:00:01.818) 0:02:02.417 ****** 2025-09-08 00:59:07.411538 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-08 00:59:07.411549 | orchestrator | 2025-09-08 00:59:07.411559 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-09-08 00:59:07.411570 | orchestrator | Monday 08 September 2025 00:58:33 +0000 (0:00:09.849) 0:02:12.267 ****** 2025-09-08 
00:59:07.411581 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-08 00:59:07.411592 | orchestrator | 2025-09-08 00:59:07.411603 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-08 00:59:07.411613 | orchestrator | Monday 08 September 2025 00:58:53 +0000 (0:00:20.290) 0:02:32.558 ****** 2025-09-08 00:59:07.411624 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-08 00:59:07.411635 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-08 00:59:07.411646 | orchestrator | 2025-09-08 00:59:07.411657 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-08 00:59:07.411668 | orchestrator | Monday 08 September 2025 00:58:59 +0000 (0:00:05.972) 0:02:38.530 ****** 2025-09-08 00:59:07.411678 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:07.411689 | orchestrator | 2025-09-08 00:59:07.411700 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-08 00:59:07.411711 | orchestrator | Monday 08 September 2025 00:59:00 +0000 (0:00:00.142) 0:02:38.673 ****** 2025-09-08 00:59:07.411722 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:07.411732 | orchestrator | 2025-09-08 00:59:07.411743 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-08 00:59:07.411754 | orchestrator | Monday 08 September 2025 00:59:00 +0000 (0:00:00.250) 0:02:38.924 ****** 2025-09-08 00:59:07.411765 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:07.411775 | orchestrator | 2025-09-08 00:59:07.411786 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-08 00:59:07.411797 | orchestrator | Monday 08 September 2025 00:59:00 +0000 (0:00:00.275) 0:02:39.200 
****** 2025-09-08 00:59:07.411808 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:07.411819 | orchestrator | 2025-09-08 00:59:07.411829 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-08 00:59:07.411840 | orchestrator | Monday 08 September 2025 00:59:01 +0000 (0:00:00.647) 0:02:39.847 ****** 2025-09-08 00:59:07.411859 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:59:07.411870 | orchestrator | 2025-09-08 00:59:07.411881 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-08 00:59:07.411892 | orchestrator | Monday 08 September 2025 00:59:04 +0000 (0:00:02.840) 0:02:42.688 ****** 2025-09-08 00:59:07.411903 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:07.411913 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:59:07.411924 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:59:07.411935 | orchestrator | 2025-09-08 00:59:07.411945 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:59:07.411957 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-08 00:59:07.411968 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-08 00:59:07.411978 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-08 00:59:07.411988 | orchestrator | 2025-09-08 00:59:07.411997 | orchestrator | 2025-09-08 00:59:07.412007 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:59:07.412016 | orchestrator | Monday 08 September 2025 00:59:04 +0000 (0:00:00.445) 0:02:43.133 ****** 2025-09-08 00:59:07.412026 | orchestrator | =============================================================================== 2025-09-08 00:59:07.412036 | orchestrator | 
keystone : Restart keystone-ssh container ------------------------------ 22.08s 2025-09-08 00:59:07.412045 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.29s 2025-09-08 00:59:07.412055 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.37s 2025-09-08 00:59:07.412071 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.32s 2025-09-08 00:59:07.412081 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.31s 2025-09-08 00:59:07.412091 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.85s 2025-09-08 00:59:07.412101 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.98s 2025-09-08 00:59:07.412110 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 8.84s 2025-09-08 00:59:07.412120 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 5.97s 2025-09-08 00:59:07.412130 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.29s 2025-09-08 00:59:07.412139 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.38s 2025-09-08 00:59:07.412149 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.22s 2025-09-08 00:59:07.412158 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.97s 2025-09-08 00:59:07.412172 | orchestrator | keystone : Creating default user role ----------------------------------- 2.84s 2025-09-08 00:59:07.412182 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.51s 2025-09-08 00:59:07.412192 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.46s 2025-09-08 00:59:07.412202 | orchestrator | keystone : 
Creating Keystone database user and setting permissions ------ 2.30s
2025-09-08 00:59:07.412211 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.29s
2025-09-08 00:59:07.412221 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.82s
2025-09-08 00:59:07.412230 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.82s
2025-09-08 00:59:07.412240 | orchestrator | 2025-09-08 00:59:07 | INFO  | Task eaeb87a6-39ff-4648-ad1a-08badec2e9c1 is in state STARTED
2025-09-08 00:59:07.412250 | orchestrator | 2025-09-08 00:59:07 | INFO  | Task b90ea05b-a12e-4f6b-a919-6688bba8fc6f is in state STARTED
2025-09-08 00:59:07.412259 | orchestrator | 2025-09-08 00:59:07 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED
2025-09-08 00:59:07.412273 | orchestrator | 2025-09-08 00:59:07 | INFO  | Task 52f9a046-42d0-4a4f-b982-4eaee421fe63 is in state STARTED
2025-09-08 00:59:07.412283 | orchestrator | 2025-09-08 00:59:07 | INFO  | Task 3d701c7e-fa74-45d5-b209-5f56d0b36bcc is in state STARTED
2025-09-08 00:59:07.412293 | orchestrator | 2025-09-08 00:59:07 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:41.062528 | orchestrator | 2025-09-08 00:59:41 | INFO  | Task eaeb87a6-39ff-4648-ad1a-08badec2e9c1 is in state SUCCESS
2025-09-08 00:59:41.062645 | orchestrator | 2025-09-08 00:59:41 | INFO  | Task b90ea05b-a12e-4f6b-a919-6688bba8fc6f is in state STARTED
2025-09-08 00:59:41.062660 | orchestrator | 2025-09-08 00:59:41 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED
2025-09-08 00:59:41.062671 | orchestrator | 2025-09-08 00:59:41 | INFO  | Task 52f9a046-42d0-4a4f-b982-4eaee421fe63 is in state STARTED
2025-09-08 00:59:41.062683 | orchestrator | 2025-09-08 00:59:41 | INFO  | Task 3d701c7e-fa74-45d5-b209-5f56d0b36bcc is in state STARTED
2025-09-08 00:59:41.062694 | orchestrator | 2025-09-08 00:59:41 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:44.085978 | orchestrator | 2025-09-08 00:59:44 | INFO  | Task b90ea05b-a12e-4f6b-a919-6688bba8fc6f is in state STARTED
2025-09-08 00:59:44.086146 | orchestrator | 2025-09-08 00:59:44 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED
2025-09-08 00:59:44.086548 | orchestrator | 2025-09-08 00:59:44 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 00:59:44.088665 | orchestrator | 2025-09-08 00:59:44 | INFO  | Task 52f9a046-42d0-4a4f-b982-4eaee421fe63 is in state STARTED
2025-09-08 00:59:44.089084 | orchestrator | 2025-09-08 00:59:44 | INFO  | Task 3d701c7e-fa74-45d5-b209-5f56d0b36bcc is in state STARTED
2025-09-08 00:59:44.089164 | orchestrator | 2025-09-08 00:59:44 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:00:20.430897 | orchestrator |
2025-09-08 01:00:20.431002 | orchestrator |
2025-09-08 01:00:20.431017 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 01:00:20.431030 | orchestrator |
2025-09-08 01:00:20.431041 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 01:00:20.431052 | orchestrator | Monday 08 September 2025 00:59:05 +0000 (0:00:00.272) 0:00:00.272 ******
2025-09-08 01:00:20.431064 | orchestrator | ok: [testbed-manager]
2025-09-08 01:00:20.431076 | orchestrator | ok: [testbed-node-3]
2025-09-08 01:00:20.431086 | orchestrator | ok: [testbed-node-4]
2025-09-08 01:00:20.431097 | orchestrator | ok: [testbed-node-5]
2025-09-08 01:00:20.431108 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:00:20.431119 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:00:20.431129 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:00:20.431140 | orchestrator |
2025-09-08 01:00:20.431151 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 01:00:20.431161 | orchestrator | Monday 08 September 2025 00:59:06 +0000 (0:00:00.982) 0:00:01.255 ******
2025-09-08 01:00:20.431173 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-09-08 01:00:20.431184 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-09-08 01:00:20.431194 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-09-08 01:00:20.431205 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-09-08 01:00:20.431216 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-09-08 01:00:20.431226 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-09-08 01:00:20.431237 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-09-08 01:00:20.431247 | orchestrator |
2025-09-08 01:00:20.431258 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-09-08 01:00:20.431269 | orchestrator |
2025-09-08 01:00:20.431279 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-09-08 01:00:20.431290 | orchestrator | Monday 08 September 2025 00:59:07 +0000 (0:00:00.959) 0:00:02.215 ******
2025-09-08 01:00:20.431301 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:00:20.431314 | orchestrator |
2025-09-08 01:00:20.431324 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-09-08 01:00:20.431335 | orchestrator | Monday 08 September 2025 00:59:09 +0000 (0:00:02.036) 0:00:04.252 ******
2025-09-08 01:00:20.431346 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2025-09-08 01:00:20.431356 | orchestrator |
2025-09-08 01:00:20.431367 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-09-08 01:00:20.431401 | orchestrator | Monday 08 September 2025 00:59:12 +0000 (0:00:03.370) 0:00:07.622 ******
2025-09-08 01:00:20.431413 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-09-08 01:00:20.431425 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-09-08 01:00:20.431436 | orchestrator |
2025-09-08 01:00:20.431447 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-09-08 01:00:20.431458 | orchestrator | Monday 08 September 2025 00:59:19 +0000 (0:00:06.098) 0:00:13.720 ******
2025-09-08 01:00:20.431469 | orchestrator | ok: [testbed-manager] => (item=service)
2025-09-08 01:00:20.431480 | orchestrator |
2025-09-08 01:00:20.431491 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-09-08 01:00:20.431502 | orchestrator | Monday 08 September 2025 00:59:22 +0000 (0:00:03.024) 0:00:16.744 ******
2025-09-08 01:00:20.431512 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-08 01:00:20.431523 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2025-09-08 01:00:20.431559 | orchestrator |
2025-09-08 01:00:20.431570 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-09-08 01:00:20.431581 | orchestrator | Monday 08 September 2025 00:59:25 +0000 (0:00:03.896) 0:00:20.641 ******
2025-09-08 01:00:20.431592 | orchestrator | ok: [testbed-manager] => (item=admin)
2025-09-08 01:00:20.431602 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2025-09-08 01:00:20.431613 | orchestrator |
2025-09-08 01:00:20.431624 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-09-08 01:00:20.431635 | orchestrator | Monday 08 September 2025 00:59:33 +0000 (0:00:07.118) 0:00:27.759 ******
2025-09-08 01:00:20.431645 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2025-09-08 01:00:20.431656 | orchestrator |
2025-09-08 01:00:20.431667 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 01:00:20.431692 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:00:20.431704 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:00:20.431715 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:00:20.431726 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:00:20.431737 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:00:20.431765 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:00:20.431777 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:00:20.431787 | orchestrator |
2025-09-08 01:00:20.431798 | orchestrator |
2025-09-08 01:00:20.431809 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 01:00:20.431820 | orchestrator | Monday 08 September 2025 00:59:39 +0000 (0:00:06.166) 0:00:33.925 ******
2025-09-08 01:00:20.431831 | orchestrator | ===============================================================================
2025-09-08 01:00:20.431841 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.12s
2025-09-08 01:00:20.431852 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 6.17s
2025-09-08 01:00:20.431863 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.10s
2025-09-08 01:00:20.431873 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.90s
2025-09-08 01:00:20.431884 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.37s
2025-09-08 01:00:20.431895 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.02s
2025-09-08 01:00:20.431905 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.04s
2025-09-08 01:00:20.431916 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.98s
2025-09-08 01:00:20.431926 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.96s
2025-09-08 01:00:20.431937 | orchestrator |
2025-09-08 01:00:20.431948 | orchestrator |
2025-09-08 01:00:20.431959 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2025-09-08 01:00:20.431969 | orchestrator |
2025-09-08 01:00:20.431980 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-09-08 01:00:20.431991 | orchestrator | Monday 08 September 2025 00:58:58 +0000 (0:00:00.281) 0:00:00.281 ******
2025-09-08 01:00:20.432001 | orchestrator | changed: [testbed-manager]
2025-09-08 01:00:20.432012 | orchestrator |
2025-09-08 01:00:20.432023 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-09-08 01:00:20.432041 | orchestrator | Monday 08 September 2025 00:59:00 +0000 (0:00:02.067) 0:00:02.349 ******
2025-09-08 01:00:20.432051 | orchestrator | changed: [testbed-manager]
2025-09-08 01:00:20.432062 | orchestrator |
2025-09-08 01:00:20.432073 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-09-08 01:00:20.432083 | orchestrator | Monday 08 September 2025 00:59:01 +0000 (0:00:01.275) 0:00:03.625 ******
2025-09-08 01:00:20.432094 | orchestrator | changed: [testbed-manager]
2025-09-08 01:00:20.432105 | orchestrator |
2025-09-08 01:00:20.432116 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-09-08 01:00:20.432126 | orchestrator | Monday 08 September 2025 00:59:02 +0000 (0:00:00.976) 0:00:04.602 ******
2025-09-08 01:00:20.432137 | orchestrator | changed: [testbed-manager]
2025-09-08 01:00:20.432148 | orchestrator |
2025-09-08 01:00:20.432158 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-09-08 01:00:20.432169 | orchestrator | Monday 08 September 2025 00:59:04 +0000 (0:00:01.801) 0:00:06.404 ******
2025-09-08 01:00:20.432180 | orchestrator | changed: [testbed-manager]
2025-09-08 01:00:20.432190 | orchestrator |
2025-09-08 01:00:20.432201 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-09-08 01:00:20.432212 | orchestrator | Monday 08 September 2025 00:59:05 +0000 (0:00:01.361) 0:00:07.765 ******
2025-09-08 01:00:20.432223 | orchestrator | changed: [testbed-manager]
2025-09-08 01:00:20.432233 | orchestrator |
2025-09-08 01:00:20.432244 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-09-08 01:00:20.432255 | orchestrator | Monday 08 September 2025 00:59:06 +0000 (0:00:00.884) 0:00:08.650 ****** 2025-09-08 01:00:20.432265 | orchestrator | changed: [testbed-manager] 2025-09-08 01:00:20.432276 | orchestrator | 2025-09-08 01:00:20.432286 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-09-08 01:00:20.432297 | orchestrator | Monday 08 September 2025 00:59:08 +0000 (0:00:02.150) 0:00:10.801 ****** 2025-09-08 01:00:20.432308 | orchestrator | changed: [testbed-manager] 2025-09-08 01:00:20.432318 | orchestrator | 2025-09-08 01:00:20.432329 | orchestrator | TASK [Create admin user] ******************************************************* 2025-09-08 01:00:20.432339 | orchestrator | Monday 08 September 2025 00:59:09 +0000 (0:00:01.083) 0:00:11.884 ****** 2025-09-08 01:00:20.432350 | orchestrator | changed: [testbed-manager] 2025-09-08 01:00:20.432361 | orchestrator | 2025-09-08 01:00:20.432371 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-09-08 01:00:20.432400 | orchestrator | Monday 08 September 2025 00:59:53 +0000 (0:00:44.294) 0:00:56.179 ****** 2025-09-08 01:00:20.432411 | orchestrator | skipping: [testbed-manager] 2025-09-08 01:00:20.432421 | orchestrator | 2025-09-08 01:00:20.432437 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-08 01:00:20.432448 | orchestrator | 2025-09-08 01:00:20.432459 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-08 01:00:20.432470 | orchestrator | Monday 08 September 2025 00:59:54 +0000 (0:00:00.178) 0:00:56.358 ****** 2025-09-08 01:00:20.432481 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:00:20.432492 | orchestrator | 2025-09-08 01:00:20.432502 | orchestrator | 
PLAY [Restart ceph manager services] ******************************************* 2025-09-08 01:00:20.432513 | orchestrator | 2025-09-08 01:00:20.432524 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-08 01:00:20.432534 | orchestrator | Monday 08 September 2025 01:00:05 +0000 (0:00:11.575) 0:01:07.934 ****** 2025-09-08 01:00:20.432545 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:00:20.432556 | orchestrator | 2025-09-08 01:00:20.432567 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-08 01:00:20.432577 | orchestrator | 2025-09-08 01:00:20.432588 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-08 01:00:20.432599 | orchestrator | Monday 08 September 2025 01:00:07 +0000 (0:00:01.333) 0:01:09.267 ****** 2025-09-08 01:00:20.432610 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:00:20.432627 | orchestrator | 2025-09-08 01:00:20.432645 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:00:20.432657 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 01:00:20.432668 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 01:00:20.432679 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 01:00:20.432690 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 01:00:20.432701 | orchestrator | 2025-09-08 01:00:20.432712 | orchestrator | 2025-09-08 01:00:20.432722 | orchestrator | 2025-09-08 01:00:20.432733 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:00:20.432744 | orchestrator | Monday 08 September 2025 01:00:18 +0000 
(0:00:11.127) 0:01:20.394 ****** 2025-09-08 01:00:20.432755 | orchestrator | =============================================================================== 2025-09-08 01:00:20.432765 | orchestrator | Create admin user ------------------------------------------------------ 44.29s 2025-09-08 01:00:20.432776 | orchestrator | Restart ceph manager service ------------------------------------------- 24.04s 2025-09-08 01:00:20.432787 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.15s 2025-09-08 01:00:20.432797 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.07s 2025-09-08 01:00:20.432808 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.80s 2025-09-08 01:00:20.432819 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.36s 2025-09-08 01:00:20.432829 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.28s 2025-09-08 01:00:20.432840 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.08s 2025-09-08 01:00:20.432851 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.98s 2025-09-08 01:00:20.432861 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.88s 2025-09-08 01:00:20.432872 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.18s 2025-09-08 01:00:20.432883 | orchestrator | 2025-09-08 01:00:20 | INFO  | Task b90ea05b-a12e-4f6b-a919-6688bba8fc6f is in state STARTED 2025-09-08 01:00:20.432894 | orchestrator | 2025-09-08 01:00:20 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED 2025-09-08 01:00:20.432905 | orchestrator | 2025-09-08 01:00:20 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:00:20.432916 | orchestrator | 2025-09-08 01:00:20 | INFO 
 | Task 52f9a046-42d0-4a4f-b982-4eaee421fe63 is in state STARTED
2025-09-08 01:00:20.432927 | orchestrator | 2025-09-08 01:00:20 | INFO  | Task 3d701c7e-fa74-45d5-b209-5f56d0b36bcc is in state SUCCESS
2025-09-08 01:00:20.432937 | orchestrator | 2025-09-08 01:00:20 | INFO  | Wait 1 second(s) until the next check
[... repeated status checks from 01:00:23 through 01:01:57 omitted: tasks b90ea05b, b390329d, 7705de78 and 52f9a046 remained in state STARTED, polled roughly every 3 seconds ...]
2025-09-08 01:02:00.952029 | orchestrator | 2025-09-08 01:02:00 | INFO  | Task b90ea05b-a12e-4f6b-a919-6688bba8fc6f is in state STARTED
2025-09-08 01:02:00.953309 | orchestrator | 2025-09-08 01:02:00 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED
2025-09-08 01:02:00.954788 | orchestrator | 2025-09-08 01:02:00 | INFO  | Task 
7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:02:00.955762 | orchestrator | 2025-09-08 01:02:00 | INFO  | Task 52f9a046-42d0-4a4f-b982-4eaee421fe63 is in state STARTED
2025-09-08 01:02:00.955788 | orchestrator | 2025-09-08 01:02:00 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:02:03.993627 | orchestrator | 2025-09-08 01:02:03 | INFO  | Task b90ea05b-a12e-4f6b-a919-6688bba8fc6f is in state STARTED
2025-09-08 01:02:03.999257 | orchestrator | 2025-09-08 01:02:03 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED
2025-09-08 01:02:04.003848 | orchestrator | 2025-09-08 01:02:04 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:02:04.005358 | orchestrator | 2025-09-08 01:02:04 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED
2025-09-08 01:02:04.006971 | orchestrator | 2025-09-08 01:02:04 | INFO  | Task 52f9a046-42d0-4a4f-b982-4eaee421fe63 is in state SUCCESS
2025-09-08 01:02:04.009376 | orchestrator | 
2025-09-08 01:02:04.009430 | orchestrator | 
2025-09-08 01:02:04.009441 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 01:02:04.009491 | orchestrator | 
2025-09-08 01:02:04.009502 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 01:02:04.009512 | orchestrator | Monday 08 September 2025 00:59:05 +0000 (0:00:00.354) 0:00:00.354 ******
2025-09-08 01:02:04.009523 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:02:04.009534 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:02:04.009544 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:02:04.009554 | orchestrator | 
2025-09-08 01:02:04.009564 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 01:02:04.009604 | orchestrator | Monday 08 September 2025 00:59:05 +0000 (0:00:00.388) 0:00:00.742 ******
2025-09-08 01:02:04.009614 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-09-08 01:02:04.009625 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-09-08 01:02:04.009634 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-09-08 01:02:04.009644 | orchestrator | 
2025-09-08 01:02:04.009654 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-09-08 01:02:04.009664 | orchestrator | 
2025-09-08 01:02:04.009674 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-08 01:02:04.009683 | orchestrator | Monday 08 September 2025 00:59:06 +0000 (0:00:00.465) 0:00:01.208 ******
2025-09-08 01:02:04.009693 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:02:04.009704 | orchestrator | 
2025-09-08 01:02:04.009714 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-09-08 01:02:04.009724 | orchestrator | Monday 08 September 2025 00:59:06 +0000 (0:00:00.529) 0:00:01.738 ******
2025-09-08 01:02:04.009733 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-09-08 01:02:04.009743 | orchestrator | 
2025-09-08 01:02:04.009753 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-09-08 01:02:04.009762 | orchestrator | Monday 08 September 2025 00:59:10 +0000 (0:00:03.767) 0:00:05.505 ******
2025-09-08 01:02:04.009772 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-09-08 01:02:04.009783 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-09-08 01:02:04.009793 | orchestrator | 
2025-09-08 01:02:04.009820 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-09-08 01:02:04.009830 | 
orchestrator | Monday 08 September 2025 00:59:16 +0000 (0:00:05.596) 0:00:11.101 ******
2025-09-08 01:02:04.009840 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-09-08 01:02:04.009850 | orchestrator | 
2025-09-08 01:02:04.009860 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-09-08 01:02:04.009870 | orchestrator | Monday 08 September 2025 00:59:19 +0000 (0:00:02.775) 0:00:13.877 ******
2025-09-08 01:02:04.009880 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-08 01:02:04.009891 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-09-08 01:02:04.009901 | orchestrator | 
2025-09-08 01:02:04.009911 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-09-08 01:02:04.009921 | orchestrator | Monday 08 September 2025 00:59:22 +0000 (0:00:03.469) 0:00:17.346 ******
2025-09-08 01:02:04.009931 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-08 01:02:04.009940 | orchestrator | 
2025-09-08 01:02:04.009950 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-09-08 01:02:04.009960 | orchestrator | Monday 08 September 2025 00:59:25 +0000 (0:00:02.807) 0:00:20.154 ******
2025-09-08 01:02:04.009972 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2025-09-08 01:02:04.009984 | orchestrator | 
2025-09-08 01:02:04.009996 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2025-09-08 01:02:04.010007 | orchestrator | Monday 08 September 2025 00:59:29 +0000 (0:00:03.971) 0:00:24.125 ******
2025-09-08 01:02:04.010093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': 
'', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 01:02:04.010128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 01:02:04.010144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-08 01:02:04.010164 | orchestrator | 
2025-09-08 01:02:04.010176 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-08 01:02:04.010189 | orchestrator | Monday 08 September 2025 00:59:36 +0000 (0:00:07.530) 0:00:31.656 ******
2025-09-08 01:02:04.010202 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:02:04.010214 | orchestrator | 
2025-09-08 01:02:04.010232 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-09-08 01:02:04.010244 | orchestrator | Monday 08 September 2025 00:59:37 +0000 (0:00:01.168) 0:00:32.824 ******
2025-09-08 01:02:04.010257 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:02:04.010268 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:02:04.010280 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:02:04.010292 | orchestrator | 
2025-09-08 01:02:04.010305 | orchestrator | TASK 
[glance : Copy over multiple ceph configs for Glance] *********************
2025-09-08 01:02:04.010317 | orchestrator | Monday 08 September 2025 00:59:43 +0000 (0:00:05.662) 0:00:38.486 ******
2025-09-08 01:02:04.010327 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-08 01:02:04.010337 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-08 01:02:04.010347 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-08 01:02:04.010356 | orchestrator | 
2025-09-08 01:02:04.010366 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-09-08 01:02:04.010376 | orchestrator | Monday 08 September 2025 00:59:45 +0000 (0:00:01.750) 0:00:40.236 ******
2025-09-08 01:02:04.010402 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-08 01:02:04.010412 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-08 01:02:04.010422 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-08 01:02:04.010432 | orchestrator | 
2025-09-08 01:02:04.010442 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-09-08 01:02:04.010451 | orchestrator | Monday 08 September 2025 00:59:46 +0000 (0:00:01.237) 0:00:41.474 ******
2025-09-08 01:02:04.010461 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:02:04.010471 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:02:04.010481 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:02:04.010490 | orchestrator | 
2025-09-08 01:02:04.010500 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-09-08 01:02:04.010525 | orchestrator | Monday 08 September 2025 00:59:47 +0000 (0:00:00.692) 0:00:42.166 ******
2025-09-08 01:02:04.010535 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:02:04.010545 | orchestrator | 
2025-09-08 01:02:04.010555 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-09-08 01:02:04.010564 | orchestrator | Monday 08 September 2025 00:59:47 +0000 (0:00:00.218) 0:00:42.385 ******
2025-09-08 01:02:04.010574 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:02:04.010584 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:02:04.010593 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:02:04.010603 | orchestrator | 
2025-09-08 01:02:04.010613 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-08 01:02:04.010622 | orchestrator | Monday 08 September 2025 00:59:47 +0000 (0:00:00.266) 0:00:42.652 ******
2025-09-08 01:02:04.010639 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:02:04.010649 | orchestrator | 
2025-09-08 01:02:04.010659 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2025-09-08 01:02:04.010668 | orchestrator | Monday 08 September 2025 00:59:48 +0000 (0:00:00.619) 0:00:43.271 ******
2025-09-08 01:02:04.010684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', 
'', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 01:02:04.010701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 
'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 01:02:04.010713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 01:02:04.010730 | orchestrator | 2025-09-08 01:02:04.010740 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-08 01:02:04.010750 | orchestrator | Monday 08 September 2025 00:59:52 +0000 (0:00:04.481) 0:00:47.753 ****** 2025-09-08 01:02:04.010768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-08 01:02:04.010780 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:04.010796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-08 01:02:04.010813 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:04.010831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-08 01:02:04.010842 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:04.010852 | orchestrator | 2025-09-08 01:02:04.010862 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-08 01:02:04.010871 | orchestrator | Monday 08 September 2025 00:59:57 +0000 (0:00:04.310) 0:00:52.063 ****** 2025-09-08 01:02:04.010886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-08 01:02:04.010903 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:04.010921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-08 01:02:04.010932 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:04.010947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-08 01:02:04.010964 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:04.010974 | orchestrator | 2025-09-08 01:02:04.010984 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-09-08 01:02:04.010994 | orchestrator | Monday 08 September 2025 01:00:01 +0000 (0:00:03.972) 0:00:56.036 ****** 2025-09-08 01:02:04.011004 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:04.011013 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:04.011023 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:04.011033 | orchestrator | 2025-09-08 01:02:04.011042 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-08 01:02:04.011052 | orchestrator | Monday 08 September 2025 01:00:05 +0000 (0:00:04.325) 0:01:00.361 ****** 2025-09-08 01:02:04.011067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 01:02:04.011083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 01:02:04.011101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 01:02:04.011111 | orchestrator | 2025-09-08 01:02:04.011121 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-09-08 01:02:04.011131 | orchestrator | Monday 08 September 2025 01:00:10 +0000 (0:00:04.889) 0:01:05.250 ****** 2025-09-08 01:02:04.011140 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:02:04.011150 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:02:04.011160 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:02:04.011169 | orchestrator | 2025-09-08 01:02:04.011179 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-09-08 01:02:04.011189 | orchestrator | Monday 08 September 2025 01:00:19 +0000 (0:00:08.971) 0:01:14.222 ****** 2025-09-08 01:02:04.011198 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:04.011208 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:04.011218 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:04.011227 | orchestrator | 2025-09-08 01:02:04.011237 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-09-08 01:02:04.011252 | orchestrator | Monday 08 September 2025 01:00:23 +0000 (0:00:04.119) 0:01:18.341 ****** 2025-09-08 01:02:04.011262 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:04.011272 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:04.011281 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:04.011291 | orchestrator | 2025-09-08 01:02:04.011301 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-09-08 01:02:04.011310 | orchestrator | Monday 08 September 2025 01:00:27 +0000 (0:00:04.420) 0:01:22.762 ****** 2025-09-08 01:02:04.011320 | 
orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:04.011330 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:04.011345 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:04.011355 | orchestrator | 2025-09-08 01:02:04.011365 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-09-08 01:02:04.011374 | orchestrator | Monday 08 September 2025 01:00:32 +0000 (0:00:04.479) 0:01:27.241 ****** 2025-09-08 01:02:04.011399 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:04.011409 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:04.011419 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:04.011429 | orchestrator | 2025-09-08 01:02:04.011438 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-09-08 01:02:04.011448 | orchestrator | Monday 08 September 2025 01:00:36 +0000 (0:00:03.985) 0:01:31.226 ****** 2025-09-08 01:02:04.011458 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:04.011467 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:04.011477 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:04.011486 | orchestrator | 2025-09-08 01:02:04.011496 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-09-08 01:02:04.011506 | orchestrator | Monday 08 September 2025 01:00:36 +0000 (0:00:00.290) 0:01:31.516 ****** 2025-09-08 01:02:04.011516 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-08 01:02:04.011525 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:04.011535 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-08 01:02:04.011545 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:04.011555 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-08 01:02:04.011564 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:04.011574 | orchestrator | 2025-09-08 01:02:04.011589 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-09-08 01:02:04.011599 | orchestrator | Monday 08 September 2025 01:00:39 +0000 (0:00:03.344) 0:01:34.860 ****** 2025-09-08 01:02:04.011609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 01:02:04.011630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 01:02:04.011658 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 01:02:04.011669 | orchestrator | 2025-09-08 01:02:04.011679 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-08 01:02:04.011688 | orchestrator | Monday 08 September 2025 01:00:44 +0000 (0:00:04.441) 0:01:39.302 ****** 2025-09-08 01:02:04.011698 | orchestrator | skipping: 
[testbed-node-0] 2025-09-08 01:02:04.011708 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:04.011717 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:04.011727 | orchestrator | 2025-09-08 01:02:04.011736 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-09-08 01:02:04.011746 | orchestrator | Monday 08 September 2025 01:00:44 +0000 (0:00:00.300) 0:01:39.602 ****** 2025-09-08 01:02:04.011756 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:02:04.011765 | orchestrator | 2025-09-08 01:02:04.011775 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-09-08 01:02:04.011791 | orchestrator | Monday 08 September 2025 01:00:46 +0000 (0:00:02.243) 0:01:41.846 ****** 2025-09-08 01:02:04.011801 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:02:04.011811 | orchestrator | 2025-09-08 01:02:04.011820 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-09-08 01:02:04.011830 | orchestrator | Monday 08 September 2025 01:00:49 +0000 (0:00:02.304) 0:01:44.151 ****** 2025-09-08 01:02:04.011840 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:02:04.011849 | orchestrator | 2025-09-08 01:02:04.011859 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-09-08 01:02:04.011868 | orchestrator | Monday 08 September 2025 01:00:51 +0000 (0:00:02.084) 0:01:46.235 ****** 2025-09-08 01:02:04.011878 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:02:04.011888 | orchestrator | 2025-09-08 01:02:04.011897 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-09-08 01:02:04.011907 | orchestrator | Monday 08 September 2025 01:01:19 +0000 (0:00:28.020) 0:02:14.256 ****** 2025-09-08 01:02:04.011917 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:02:04.011927 | orchestrator | 2025-09-08 
01:02:04.011942 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-08 01:02:04.011952 | orchestrator | Monday 08 September 2025 01:01:21 +0000 (0:00:02.193) 0:02:16.449 ****** 2025-09-08 01:02:04.011962 | orchestrator | 2025-09-08 01:02:04.011972 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-08 01:02:04.011981 | orchestrator | Monday 08 September 2025 01:01:21 +0000 (0:00:00.061) 0:02:16.511 ****** 2025-09-08 01:02:04.011991 | orchestrator | 2025-09-08 01:02:04.012001 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-08 01:02:04.012011 | orchestrator | Monday 08 September 2025 01:01:21 +0000 (0:00:00.077) 0:02:16.589 ****** 2025-09-08 01:02:04.012020 | orchestrator | 2025-09-08 01:02:04.012030 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-09-08 01:02:04.012040 | orchestrator | Monday 08 September 2025 01:01:21 +0000 (0:00:00.067) 0:02:16.657 ****** 2025-09-08 01:02:04.012049 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:02:04.012059 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:02:04.012069 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:02:04.012078 | orchestrator | 2025-09-08 01:02:04.012088 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:02:04.012099 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-08 01:02:04.012109 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-08 01:02:04.012119 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-08 01:02:04.012129 | orchestrator | 2025-09-08 01:02:04.012138 | orchestrator | 2025-09-08 01:02:04.012148 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:02:04.012158 | orchestrator | Monday 08 September 2025 01:02:01 +0000 (0:00:39.861) 0:02:56.518 ****** 2025-09-08 01:02:04.012168 | orchestrator | =============================================================================== 2025-09-08 01:02:04.012177 | orchestrator | glance : Restart glance-api container ---------------------------------- 39.86s 2025-09-08 01:02:04.012191 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.02s 2025-09-08 01:02:04.012201 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 8.97s 2025-09-08 01:02:04.012211 | orchestrator | glance : Ensuring config directories exist ------------------------------ 7.53s 2025-09-08 01:02:04.012220 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 5.66s 2025-09-08 01:02:04.012230 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.60s 2025-09-08 01:02:04.012245 | orchestrator | glance : Copying over config.json files for services -------------------- 4.89s 2025-09-08 01:02:04.012255 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.48s 2025-09-08 01:02:04.012265 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.48s 2025-09-08 01:02:04.012274 | orchestrator | glance : Check glance containers ---------------------------------------- 4.44s 2025-09-08 01:02:04.012284 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.42s 2025-09-08 01:02:04.012294 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.33s 2025-09-08 01:02:04.012304 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.31s 2025-09-08 01:02:04.012313 | orchestrator | 
glance : Copying over glance-cache.conf for glance_api ------------------ 4.12s 2025-09-08 01:02:04.012323 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.99s 2025-09-08 01:02:04.012332 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.97s 2025-09-08 01:02:04.012342 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.97s 2025-09-08 01:02:04.012352 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.77s 2025-09-08 01:02:04.012361 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.47s 2025-09-08 01:02:04.012371 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.34s 2025-09-08 01:02:04.012407 | orchestrator | 2025-09-08 01:02:04 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:07.074508 | orchestrator | 2025-09-08 01:02:07 | INFO  | Task b90ea05b-a12e-4f6b-a919-6688bba8fc6f is in state STARTED 2025-09-08 01:02:07.076661 | orchestrator | 2025-09-08 01:02:07 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED 2025-09-08 01:02:07.078123 | orchestrator | 2025-09-08 01:02:07 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:02:07.079525 | orchestrator | 2025-09-08 01:02:07 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED 2025-09-08 01:02:07.079547 | orchestrator | 2025-09-08 01:02:07 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:10.117250 | orchestrator | 2025-09-08 01:02:10 | INFO  | Task b90ea05b-a12e-4f6b-a919-6688bba8fc6f is in state STARTED 2025-09-08 01:02:10.118333 | orchestrator | 2025-09-08 01:02:10 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED 2025-09-08 01:02:10.120622 | orchestrator | 2025-09-08 01:02:10 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 
2025-09-08 01:02:10.123025 | orchestrator | 2025-09-08 01:02:10 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED 2025-09-08 01:02:10.123297 | orchestrator | 2025-09-08 01:02:10 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:13.163175 | orchestrator | 2025-09-08 01:02:13 | INFO  | Task b90ea05b-a12e-4f6b-a919-6688bba8fc6f is in state STARTED 2025-09-08 01:02:13.165746 | orchestrator | 2025-09-08 01:02:13 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED 2025-09-08 01:02:13.168042 | orchestrator | 2025-09-08 01:02:13 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:02:13.170453 | orchestrator | 2025-09-08 01:02:13 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED 2025-09-08 01:02:13.170823 | orchestrator | 2025-09-08 01:02:13 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:16.250702 | orchestrator | 2025-09-08 01:02:16 | INFO  | Task b90ea05b-a12e-4f6b-a919-6688bba8fc6f is in state STARTED 2025-09-08 01:02:16.252153 | orchestrator | 2025-09-08 01:02:16 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED 2025-09-08 01:02:16.252219 | orchestrator | 2025-09-08 01:02:16 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:02:16.253370 | orchestrator | 2025-09-08 01:02:16 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED 2025-09-08 01:02:16.253422 | orchestrator | 2025-09-08 01:02:16 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:19.295101 | orchestrator | 2025-09-08 01:02:19 | INFO  | Task b90ea05b-a12e-4f6b-a919-6688bba8fc6f is in state STARTED 2025-09-08 01:02:19.297562 | orchestrator | 2025-09-08 01:02:19 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED 2025-09-08 01:02:19.299337 | orchestrator | 2025-09-08 01:02:19 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:02:19.301924 | 
orchestrator | 2025-09-08 01:02:19 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED 2025-09-08 01:02:19.301960 | orchestrator | 2025-09-08 01:02:19 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:22.350293 | orchestrator | 2025-09-08 01:02:22 | INFO  | Task b90ea05b-a12e-4f6b-a919-6688bba8fc6f is in state STARTED 2025-09-08 01:02:22.352199 | orchestrator | 2025-09-08 01:02:22 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED 2025-09-08 01:02:22.353708 | orchestrator | 2025-09-08 01:02:22 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:02:22.355334 | orchestrator | 2025-09-08 01:02:22 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED 2025-09-08 01:02:22.355545 | orchestrator | 2025-09-08 01:02:22 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:25.402843 | orchestrator | 2025-09-08 01:02:25 | INFO  | Task b90ea05b-a12e-4f6b-a919-6688bba8fc6f is in state STARTED 2025-09-08 01:02:25.405828 | orchestrator | 2025-09-08 01:02:25 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED 2025-09-08 01:02:25.408213 | orchestrator | 2025-09-08 01:02:25 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:02:25.412567 | orchestrator | 2025-09-08 01:02:25 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED 2025-09-08 01:02:25.412589 | orchestrator | 2025-09-08 01:02:25 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:28.453042 | orchestrator | 2025-09-08 01:02:28 | INFO  | Task b90ea05b-a12e-4f6b-a919-6688bba8fc6f is in state SUCCESS 2025-09-08 01:02:28.454779 | orchestrator | 2025-09-08 01:02:28.454825 | orchestrator | 2025-09-08 01:02:28.454838 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 01:02:28.454850 | orchestrator | 2025-09-08 01:02:28.454862 | orchestrator | TASK [Group hosts based on 
Kolla action] *************************************** 2025-09-08 01:02:28.454874 | orchestrator | Monday 08 September 2025 00:58:57 +0000 (0:00:00.298) 0:00:00.298 ****** 2025-09-08 01:02:28.454885 | orchestrator | ok: [testbed-manager] 2025-09-08 01:02:28.454898 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:02:28.454909 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:02:28.454920 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:02:28.454931 | orchestrator | ok: [testbed-node-3] 2025-09-08 01:02:28.454941 | orchestrator | ok: [testbed-node-4] 2025-09-08 01:02:28.454952 | orchestrator | ok: [testbed-node-5] 2025-09-08 01:02:28.454963 | orchestrator | 2025-09-08 01:02:28.454974 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 01:02:28.454985 | orchestrator | Monday 08 September 2025 00:58:58 +0000 (0:00:00.981) 0:00:01.279 ****** 2025-09-08 01:02:28.454997 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-09-08 01:02:28.455008 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-09-08 01:02:28.455019 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-09-08 01:02:28.455056 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-09-08 01:02:28.455068 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-09-08 01:02:28.455078 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-09-08 01:02:28.455089 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-09-08 01:02:28.455100 | orchestrator | 2025-09-08 01:02:28.455111 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-09-08 01:02:28.455121 | orchestrator | 2025-09-08 01:02:28.455132 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-08 01:02:28.455205 | orchestrator | Monday 08 
September 2025 00:58:59 +0000 (0:00:00.717) 0:00:01.996 ****** 2025-09-08 01:02:28.455218 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 01:02:28.455231 | orchestrator | 2025-09-08 01:02:28.455334 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-09-08 01:02:28.455348 | orchestrator | Monday 08 September 2025 00:59:01 +0000 (0:00:01.751) 0:00:03.747 ****** 2025-09-08 01:02:28.455362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.455421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.455438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.455455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.455484 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.455500 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-08 01:02:28.455527 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.455540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.455555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.455589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.455610 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.455634 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.455667 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.455691 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.455704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.455719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.455730 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.455748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.455759 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.455771 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.455795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.455807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.455819 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.455830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.455847 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.455858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.455872 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-08 01:02:28.455958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.455987 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.455999 | orchestrator | 2025-09-08 01:02:28.456011 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-08 01:02:28.456022 | orchestrator | Monday 08 September 2025 00:59:05 +0000 (0:00:04.087) 0:00:07.835 ****** 2025-09-08 01:02:28.456033 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 01:02:28.456044 | orchestrator | 2025-09-08 01:02:28.456055 | orchestrator | TASK [service-cert-copy : prometheus | 
Copying over extra CA certificates] ***** 2025-09-08 01:02:28.456066 | orchestrator | Monday 08 September 2025 00:59:07 +0000 (0:00:01.536) 0:00:09.371 ****** 2025-09-08 01:02:28.456077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.456095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.456107 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-08 01:02:28.456118 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.456144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.456221 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.456234 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.456245 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.456257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.456274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.456286 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.456305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.456325 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.456336 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.456348 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.456359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.456371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.456387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.456425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.456444 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.456463 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.456475 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-08 01:02:28.456488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.456499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.456516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.456534 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.456545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.457940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.457980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.457992 | orchestrator | 2025-09-08 01:02:28.458004 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-08 01:02:28.458063 | orchestrator | Monday 08 September 2025 00:59:12 +0000 (0:00:05.925) 0:00:15.297 ****** 2025-09-08 01:02:28.458078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:28.458090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:28.458118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:28.458141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:28.458171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:28.458194 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-08 01:02:28.458206 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:28.458217 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:28.458230 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-08 01:02:28.458248 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:28.458271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:28.458282 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:28.458300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:28.458312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:28.458323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:28.458335 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:28.458346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:28.458358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:28.458375 | orchestrator | skipping: [testbed-manager] 2025-09-08 01:02:28.458387 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:28.458432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-08 01:02:28.458444 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:02:28.458455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:28.458467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:28.458486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:28.458497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:28.458509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:28.458520 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:28.458531 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:28.458555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:28.458569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-08 01:02:28.458582 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:02:28.458596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:28.458609 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:28.458630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 
'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-08 01:02:28.458643 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:02:28.458656 | orchestrator | 2025-09-08 01:02:28.458670 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-09-08 01:02:28.458683 | orchestrator | Monday 08 September 2025 00:59:14 +0000 (0:00:01.718) 0:00:17.016 ****** 2025-09-08 01:02:28.458696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:28.458713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:28.458743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:28.458763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:28.458785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:28.458808 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-08 01:02:28.458837 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:28.458856 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:28.458870 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-08 01:02:28.458930 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:28.458942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:28.458954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:28.458965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:28.458977 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:28.458994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:28.459006 | orchestrator | skipping: [testbed-manager] 2025-09-08 01:02:28.459017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:28.459036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:28.459048 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:28.459059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:28.459075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:28.459087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:28.459098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:28.459109 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:28.459126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:28.459138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:28.459156 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-08 01:02:28.459167 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:02:28.459178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:28.459190 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:28.459206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-08 01:02:28.459218 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:02:28.459229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:28.459241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:28.459259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-08 01:02:28.459270 | orchestrator | skipping: 
[testbed-node-5]
2025-09-08 01:02:28.459281 | orchestrator |
2025-09-08 01:02:28.459292 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-09-08 01:02:28.459315 | orchestrator | Monday 08 September 2025 00:59:16 +0000 (0:00:02.125) 0:00:19.141 ******
2025-09-08 01:02:28.459326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-08 01:02:28.459337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-08 01:02:28.459348 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091',
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-08 01:02:28.459365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.459376 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.459387 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.459436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.459456 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.459468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.459479 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.459491 | orchestrator 
| changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.459507 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.459519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.459531 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.459549 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.459568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.459581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.459593 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.459604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.459620 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.459632 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.459649 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-08 01:02:28.459668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.459680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.459691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:28.459703 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:28.459719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 
01:02:28.459731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 01:02:28.459742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 01:02:28.459764 | orchestrator |
2025-09-08 01:02:28.459782 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-09-08 01:02:28.459800 | orchestrator | Monday 08 September 2025 00:59:22 +0000 (0:00:05.749) 0:00:24.891 ******
2025-09-08 01:02:28.459820 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-08 01:02:28.459840 | orchestrator |
2025-09-08 01:02:28.459858 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-09-08 01:02:28.459881 | orchestrator | Monday 08 September 2025 00:59:23 +0000 (0:00:01.038) 0:00:25.929 ******
2025-09-08 01:02:28.459894 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False,
'uid': 0, 'gid': 0, 'size': 996, 'inode': 1912407, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.304143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.459906 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1912407, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.304143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.459918 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1912407, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.304143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.459930 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1912417, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 
1757291793.309227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.459948 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1912407, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.304143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.459959 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1912407, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.304143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:28.459984 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1912417, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.309227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.459996 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1912417, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.309227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460007 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1912417, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.309227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460018 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1912407, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.304143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460029 | orchestrator | skipping: [testbed-node-1] 
=> (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1912405, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3025727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460045 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1912407, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.304143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460064 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1912405, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3025727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460080 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1912405, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3025727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460092 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1912417, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.309227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460103 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1912413, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.306448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460114 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1912405, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 
'ctime': 1757291793.3025727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460125 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1912417, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.309227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460141 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1912413, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.306448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460159 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1912405, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3025727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460177 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1912403, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.301448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460189 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1912413, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.306448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460200 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1912417, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.309227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:28.460211 | orchestrator | skipping: [testbed-node-3] => 
(item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1912408, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3045762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460222 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1912413, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.306448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460241 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1912403, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.301448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460259 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1912405, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3025727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460271 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1912413, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.306448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460420 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1912412, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.306448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460438 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1912403, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 
1757291793.301448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460449 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1912409, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3048854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460460 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1912403, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.301448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460478 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1912408, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3045762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460498 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1912408, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3045762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460509 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1912403, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.301448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460527 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1912405, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3025727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:28.460539 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1912412, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.306448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460550 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1912406, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3034608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460561 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1912408, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3045762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460584 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1912413, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.306448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460596 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1912409, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3048854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460607 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1912412, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.306448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460624 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1912408, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 
1757291793.3045762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460636 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1912412, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.306448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460647 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1912403, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.301448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460658 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1912416, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3087788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460681 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1912406, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3034608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460693 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1912413, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.306448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:28.460704 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1912401, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3006377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.460720 | orchestrator | skipping: [testbed-node-0] => 
(item=/operations/prometheus/hardware.rules) 
2025-09-08 01:02:28.460732 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules) 
2025-09-08 01:02:28.460743 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules) 
2025-09-08 01:02:28.460754 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules) 
2025-09-08 01:02:28.460777 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules) 
2025-09-08 01:02:28.460788 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules) 
2025-09-08 01:02:28.460800 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules) 
2025-09-08 01:02:28.460825 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules) 
2025-09-08 01:02:28.460845 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules) 
2025-09-08 01:02:28.460866 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules) 
2025-09-08 01:02:28.460886 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules) 
2025-09-08 01:02:28.460925 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules) 
2025-09-08 01:02:28.460938 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2025-09-08 01:02:28.460949 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules) 
2025-09-08 01:02:28.460967 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules) 
2025-09-08 01:02:28.460978 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules) 
2025-09-08 01:02:28.460989 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules) 
2025-09-08 01:02:28.461007 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules) 
2025-09-08 01:02:28.461023 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules) 
2025-09-08 01:02:28.461037 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules) 
2025-09-08 01:02:28.461051 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules) 
2025-09-08 01:02:28.461070 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules) 
2025-09-08 01:02:28.461083 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules) 
2025-09-08 01:02:28.461095 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules) 
2025-09-08 01:02:28.461114 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2025-09-08 01:02:28.461136 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules) 
2025-09-08 01:02:28.461150 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules) 
2025-09-08 01:02:28.461164 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules) 
2025-09-08 01:02:28.461184 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules) 
2025-09-08 01:02:28.461197 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules) 
2025-09-08 01:02:28.461210 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules) 
2025-09-08 01:02:28.461230 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules) 
2025-09-08 01:02:28.461249 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rec.rules) 
2025-09-08 01:02:28.461263 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules) 
2025-09-08 01:02:28.461275 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules) 
2025-09-08 01:02:28.461296 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules) 
2025-09-08 01:02:28.461310 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rules)
2025-09-08 01:02:28.461329 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules) 
2025-09-08 01:02:28.461343 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules) 
2025-09-08 01:02:28.461361 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/mysql.rules) 
2025-09-08 01:02:28.461376 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules) 
2025-09-08 01:02:28.461506 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/mysql.rules) 
2025-09-08 01:02:28.461753 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules) 
2025-09-08 01:02:28.461775 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules) 
2025-09-08 01:02:28.461821 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/hardware.rules)
2025-09-08 01:02:28.461834 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules) 
2025-09-08 01:02:28.461866 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules) 
2025-09-08 01:02:28.461879 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/rabbitmq.rules) 
2025-09-08 01:02:28.461891 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:02:28.461906 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/rabbitmq.rules) 
2025-09-08 01:02:28.461928 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules) 
2025-09-08 01:02:28.461939 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:02:28.461951 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules) 
2025-09-08 01:02:28.461969 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules) 
2025-09-08 01:02:28.461981 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules) 
2025-09-08 01:02:28.461998 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules) 
2025-09-08 01:02:28.462009 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules) 
2025-09-08 01:02:28.462153 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rules) 
2025-09-08 01:02:28.462180 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules) 
2025-09-08 01:02:28.462201 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules) 
2025-09-08 01:02:28.462213 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:02:28.462224 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/elasticsearch.rules)
2025-09-08 01:02:28.462236 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rules) 
2025-09-08 01:02:28.462254 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules) 
2025-09-08 01:02:28.462266 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rec.rules) 
2025-09-08 01:02:28.462277 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/rabbitmq.rules) 
2025-09-08 01:02:28.462289 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:02:28.462309 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rec.rules) 
2025-09-08 01:02:28.462328 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/mysql.rules) 
2025-09-08 01:02:28.462339 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rec.rules)
2025-09-08 01:02:28.462351 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/mysql.rules) 
2025-09-08 01:02:28.462367 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/rabbitmq.rules) 
'isuid': False, 'isgid': False})  2025-09-08 01:02:28.462379 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:02:28.462421 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1912422, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.311168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:28.462433 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:02:28.462444 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1912401, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3006377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:28.462473 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1912423, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.311725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}) 2025-09-08 01:02:28.462485 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1912415, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.308196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:28.462497 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1912404, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3022132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:28.462508 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1912402, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3011312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:28.462525 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1912411, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3055182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:28.462537 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1912410, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.3051684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:28.462548 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1912422, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.311168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:28.462575 | orchestrator | 2025-09-08 01:02:28.462588 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-09-08 01:02:28.462601 | orchestrator | Monday 08 September 2025 00:59:51 +0000 (0:00:27.568) 0:00:53.497 ****** 2025-09-08 
01:02:28.462612 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-08 01:02:28.462623 | orchestrator |
2025-09-08 01:02:28.462641 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-09-08 01:02:28.462652 | orchestrator | Monday 08 September 2025 00:59:51 +0000 (0:00:00.721) 0:00:54.218 ******
2025-09-08 01:02:28.462663 | orchestrator | [WARNING]: Skipped
2025-09-08 01:02:28.462676 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-08 01:02:28.462688 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-09-08 01:02:28.462700 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-08 01:02:28.462711 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-09-08 01:02:28.462722 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-08 01:02:28.462733 | orchestrator | [WARNING]: Skipped
2025-09-08 01:02:28.462744 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-08 01:02:28.462755 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-09-08 01:02:28.462766 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-08 01:02:28.462776 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-09-08 01:02:28.462790 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-08 01:02:28.462800 | orchestrator | [WARNING]: Skipped
2025-09-08 01:02:28.462811 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-08 01:02:28.462822 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-09-08 01:02:28.462833 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-08 01:02:28.462843 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-09-08 01:02:28.462854 | orchestrator | [WARNING]: Skipped
2025-09-08 01:02:28.462865 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-08 01:02:28.462876 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-09-08 01:02:28.462887 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-08 01:02:28.462897 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-09-08 01:02:28.462908 | orchestrator | [WARNING]: Skipped
2025-09-08 01:02:28.462919 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-08 01:02:28.462930 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-09-08 01:02:28.462941 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-08 01:02:28.462951 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-09-08 01:02:28.462962 | orchestrator | [WARNING]: Skipped
2025-09-08 01:02:28.462973 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-08 01:02:28.462984 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-09-08 01:02:28.462995 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-08 01:02:28.463005 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-09-08 01:02:28.463016 | orchestrator | [WARNING]: Skipped
2025-09-08 01:02:28.463027 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-08 01:02:28.463037 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-09-08 01:02:28.463048 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-08 01:02:28.463071 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-09-08 01:02:28.463083 | orchestrator | ok:
[testbed-node-0 -> localhost]
2025-09-08 01:02:28.463093 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-08 01:02:28.463104 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-08 01:02:28.463115 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-08 01:02:28.463126 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-08 01:02:28.463137 | orchestrator |
2025-09-08 01:02:28.463147 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-09-08 01:02:28.463158 | orchestrator | Monday 08 September 2025 00:59:54 +0000 (0:00:02.540) 0:00:56.758 ******
2025-09-08 01:02:28.463170 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-08 01:02:28.463182 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:02:28.463192 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-08 01:02:28.463203 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:02:28.463214 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-08 01:02:28.463225 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:02:28.463236 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-08 01:02:28.463246 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:02:28.463257 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-08 01:02:28.463268 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:02:28.463279 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-08 01:02:28.463290 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:02:28.463301 | orchestrator | changed: [testbed-manager] =>
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-08 01:02:28.463312 | orchestrator |
2025-09-08 01:02:28.463322 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-09-08 01:02:28.463333 | orchestrator | Monday 08 September 2025 01:00:13 +0000 (0:00:19.453) 0:01:16.212 ******
2025-09-08 01:02:28.463344 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-08 01:02:28.463362 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-08 01:02:28.463373 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:02:28.463384 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:02:28.463422 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-08 01:02:28.463434 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:02:28.463445 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-08 01:02:28.463456 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:02:28.463467 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-08 01:02:28.463478 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:02:28.463489 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-08 01:02:28.463499 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:02:28.463510 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-08 01:02:28.463521 | orchestrator |
2025-09-08 01:02:28.463532 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-09-08 01:02:28.463543 | orchestrator | Monday 08 September 2025 01:00:18 +0000 (0:00:04.926) 0:01:21.139 ******
2025-09-08 01:02:28.463565 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-08 01:02:28.463586 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-08 01:02:28.463597 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-08 01:02:28.463608 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-08 01:02:28.463619 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-08 01:02:28.463630 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:02:28.463641 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:02:28.463651 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:02:28.463662 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:02:28.463673 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-08 01:02:28.463684 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:02:28.463695 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-08 01:02:28.463706 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:02:28.463717 | orchestrator |
2025-09-08 01:02:28.463728 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-09-08 01:02:28.463739 | orchestrator | Monday 08 September 2025 01:00:21 +0000 (0:00:02.712) 0:01:23.851 ******
2025-09-08 01:02:28.463749 |
orchestrator | ok: [testbed-manager -> localhost]
2025-09-08 01:02:28.463760 | orchestrator |
2025-09-08 01:02:28.463776 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-09-08 01:02:28.463788 | orchestrator | Monday 08 September 2025 01:00:22 +0000 (0:00:00.639) 0:01:24.491 ******
2025-09-08 01:02:28.463798 | orchestrator | skipping: [testbed-manager]
2025-09-08 01:02:28.463809 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:02:28.463820 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:02:28.463831 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:02:28.463842 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:02:28.463852 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:02:28.463863 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:02:28.463874 | orchestrator |
2025-09-08 01:02:28.463885 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-09-08 01:02:28.463896 | orchestrator | Monday 08 September 2025 01:00:22 +0000 (0:00:00.721) 0:01:25.213 ******
2025-09-08 01:02:28.463906 | orchestrator | skipping: [testbed-manager]
2025-09-08 01:02:28.463917 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:02:28.463928 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:02:28.463939 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:02:28.463950 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:02:28.463961 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:02:28.463972 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:02:28.463982 | orchestrator |
2025-09-08 01:02:28.463993 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-09-08 01:02:28.464004 | orchestrator | Monday 08 September 2025 01:00:25 +0000 (0:00:02.663) 0:01:27.876 ******
2025-09-08 01:02:28.464015 | orchestrator | skipping: [testbed-manager] =>
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-08 01:02:28.464026 | orchestrator | skipping: [testbed-manager]
2025-09-08 01:02:28.464037 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-08 01:02:28.464048 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:02:28.464059 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-08 01:02:28.464070 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:02:28.464081 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-08 01:02:28.464098 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:02:28.464109 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-08 01:02:28.464127 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:02:28.464138 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-08 01:02:28.464148 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:02:28.464159 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-08 01:02:28.464170 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:02:28.464181 | orchestrator |
2025-09-08 01:02:28.464192 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-09-08 01:02:28.464203 | orchestrator | Monday 08 September 2025 01:00:27 +0000 (0:00:02.106) 0:01:29.983 ******
2025-09-08 01:02:28.464213 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-08 01:02:28.464225 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:02:28.464236 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-08 01:02:28.464247 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:02:28.464257 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-08 01:02:28.464269 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:02:28.464279 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-08 01:02:28.464290 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:02:28.464301 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-08 01:02:28.464312 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:02:28.464323 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-08 01:02:28.464334 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-08 01:02:28.464345 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:02:28.464356 | orchestrator |
2025-09-08 01:02:28.464367 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-09-08 01:02:28.464378 | orchestrator | Monday 08 September 2025 01:00:29 +0000 (0:00:02.234) 0:01:32.218 ******
2025-09-08 01:02:28.464388 | orchestrator | [WARNING]: Skipped
2025-09-08 01:02:28.464420 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2025-09-08 01:02:28.464431 | orchestrator | due to this access issue:
2025-09-08 01:02:28.464442 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2025-09-08 01:02:28.464454 | orchestrator | not a directory
2025-09-08 01:02:28.464465 | orchestrator | ok: [testbed-manager ->
localhost]
2025-09-08 01:02:28.464475 | orchestrator |
2025-09-08 01:02:28.464486 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-09-08 01:02:28.464497 | orchestrator | Monday 08 September 2025 01:00:31 +0000 (0:00:01.163) 0:01:33.382 ******
2025-09-08 01:02:28.464508 | orchestrator | skipping: [testbed-manager]
2025-09-08 01:02:28.464519 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:02:28.464529 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:02:28.464540 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:02:28.464551 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:02:28.464562 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:02:28.464578 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:02:28.464589 | orchestrator |
2025-09-08 01:02:28.464600 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-09-08 01:02:28.464611 | orchestrator | Monday 08 September 2025 01:00:31 +0000 (0:00:00.764) 0:01:34.147 ******
2025-09-08 01:02:28.464628 | orchestrator | skipping: [testbed-manager]
2025-09-08 01:02:28.464639 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:02:28.464650 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:02:28.464661 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:02:28.464672 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:02:28.464682 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:02:28.464693 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:02:28.464704 | orchestrator |
2025-09-08 01:02:28.464715 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2025-09-08 01:02:28.464726 | orchestrator | Monday 08 September 2025 01:00:32 +0000 (0:00:00.716) 0:01:34.863 ******
2025-09-08 01:02:28.464738 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value':
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-08 01:02:28.464760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.464774 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:28.464785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-08 01:02:28.464796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-08 01:02:28.464808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 01:02:28.464832 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-08 01:02:28.464844 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-08 01:02:28.464856 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-08 01:02:28.464874 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-08 01:02:28.464886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 01:02:28.464899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 01:02:28.464911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 01:02:28.464923 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-08 01:02:28.464946 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-08 01:02:28.464958 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-08 01:02:28.464969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-08 01:02:28.464987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 01:02:28.465000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 01:02:28.465011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-08 01:02:28.465024 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-08 01:02:28.465055 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-08 01:02:28.465068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-08 01:02:28.465079 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-08 01:02:28.465097 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 01:02:28.465109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 01:02:28.465120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-08 01:02:28.465132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})
2025-09-08 01:02:28.465149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 01:02:28.465161 | orchestrator | 
2025-09-08 01:02:28.465172 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-09-08 01:02:28.465191 | orchestrator | Monday 08 September 2025 01:00:37 +0000 (0:00:04.458) 0:01:39.322 ******
2025-09-08 01:02:28.465203 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0) 
2025-09-08 01:02:28.465214 | orchestrator | skipping: [testbed-manager]
2025-09-08 01:02:28.465225 | orchestrator | 
2025-09-08 01:02:28.465235 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-08 01:02:28.465246 | orchestrator | Monday 08 September 2025 01:00:38 +0000 (0:00:01.255) 0:01:40.578 ******
2025-09-08 01:02:28.465257 | orchestrator | 
2025-09-08 01:02:28.465268 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-08 01:02:28.465279 | orchestrator | Monday 08 September 2025 01:00:38 +0000 (0:00:00.068) 0:01:40.646 ******
2025-09-08 01:02:28.465290 | orchestrator | 
2025-09-08 01:02:28.465301 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-08 01:02:28.465312 | orchestrator | Monday 08 September 2025 01:00:38 +0000 (0:00:00.070) 0:01:40.717 ******
2025-09-08 01:02:28.465322 | orchestrator | 
2025-09-08 01:02:28.465333 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-08 01:02:28.465344 | orchestrator | Monday 08 September 2025 01:00:38 +0000 (0:00:00.070) 0:01:40.787 ******
2025-09-08 01:02:28.465355 | orchestrator | 
2025-09-08 01:02:28.465366 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-08 01:02:28.465377 | orchestrator | Monday 08 September 2025 01:00:38 +0000 (0:00:00.252) 0:01:41.039 ******
2025-09-08 01:02:28.465387 | orchestrator | 
2025-09-08 01:02:28.465447 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-08 01:02:28.465458 | orchestrator | Monday 08 September 2025 01:00:38 +0000 (0:00:00.080) 0:01:41.120 ******
2025-09-08 01:02:28.465469 | orchestrator | 
2025-09-08 01:02:28.465480 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-08 01:02:28.465491 | orchestrator | Monday 08 September 2025 01:00:38 +0000 (0:00:00.067) 0:01:41.188 ******
2025-09-08 01:02:28.465502 | orchestrator | 
2025-09-08 01:02:28.465513 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-09-08 01:02:28.465524 | orchestrator | Monday 08 September 2025 01:00:38 +0000 (0:00:00.090) 0:01:41.278 ******
2025-09-08 01:02:28.465535 | orchestrator | changed: [testbed-manager]
2025-09-08 01:02:28.465546 | orchestrator | 
2025-09-08 01:02:28.465557 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-09-08 01:02:28.465575 | orchestrator | Monday 08 September 2025 01:00:58 +0000 (0:00:19.106) 0:02:00.385 ******
2025-09-08 01:02:28.465586 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:02:28.465597 | orchestrator | changed: [testbed-manager]
2025-09-08 01:02:28.465608 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:02:28.465619 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:02:28.465629 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:02:28.465640 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:02:28.465651 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:02:28.465671 | orchestrator | 
2025-09-08 01:02:28.465682 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-09-08 01:02:28.465693 | orchestrator | Monday 08 September 2025 01:01:11 +0000 (0:00:13.837) 0:02:14.223 ******
2025-09-08 01:02:28.465704 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:02:28.465714 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:02:28.465725 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:02:28.465736 | orchestrator | 
2025-09-08 01:02:28.465747 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-09-08 01:02:28.465758 | orchestrator | Monday 08 September 2025 01:01:23 +0000 (0:00:11.690) 0:02:25.913 ******
2025-09-08 01:02:28.465769 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:02:28.465779 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:02:28.465790 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:02:28.465801 | orchestrator | 
2025-09-08 01:02:28.465812 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-09-08 01:02:28.465823 | orchestrator | Monday 08 September 2025 01:01:35 +0000 (0:00:12.348) 0:02:38.262 ******
2025-09-08 01:02:28.465833 | orchestrator | changed: [testbed-manager]
2025-09-08 01:02:28.465844 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:02:28.465855 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:02:28.465866 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:02:28.465877 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:02:28.465887 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:02:28.465898 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:02:28.465909 | orchestrator | 
2025-09-08 01:02:28.465920 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-09-08 01:02:28.465931 | orchestrator | Monday 08 September 2025 01:01:53 +0000 (0:00:17.080) 0:02:55.342 ******
2025-09-08 01:02:28.465942 | orchestrator | changed: [testbed-manager]
2025-09-08 01:02:28.465952 | orchestrator | 
2025-09-08 01:02:28.465963 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-09-08 01:02:28.465974 | orchestrator | Monday 08 September 2025 01:02:01 +0000 (0:00:08.228) 0:03:03.570 ******
2025-09-08 01:02:28.465985 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:02:28.465996 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:02:28.466006 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:02:28.466048 | orchestrator | 
2025-09-08 01:02:28.466061 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-09-08 01:02:28.466072 | orchestrator | Monday 08 September 2025 01:02:10 +0000 (0:00:09.663) 0:03:13.234 ******
2025-09-08 01:02:28.466083 | orchestrator | changed: [testbed-manager]
2025-09-08 01:02:28.466094 | orchestrator | 
2025-09-08 01:02:28.466105 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-09-08 01:02:28.466116 | orchestrator | Monday 08 September 2025 01:02:15 +0000 (0:00:05.032) 0:03:18.266 ******
2025-09-08 01:02:28.466127 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:02:28.466138 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:02:28.466148 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:02:28.466159 | orchestrator | 
2025-09-08 01:02:28.466170 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 01:02:28.466181 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-08 01:02:28.466200 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-08 01:02:28.466212 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-08 01:02:28.466223 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-08 01:02:28.466234 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-08 01:02:28.466252 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-08 01:02:28.466263 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-08 01:02:28.466274 | orchestrator | 
2025-09-08 01:02:28.466285 | orchestrator | 
2025-09-08 01:02:28.466296 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 01:02:28.466307 | orchestrator | Monday 08 September 2025 01:02:27 +0000 (0:00:11.899) 0:03:30.166 ******
2025-09-08 01:02:28.466318 | orchestrator | ===============================================================================
2025-09-08 01:02:28.466328 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 27.57s
2025-09-08 01:02:28.466339 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 19.45s
2025-09-08 01:02:28.466350 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 19.11s
2025-09-08 01:02:28.466361 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 17.08s
2025-09-08 01:02:28.466372 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.84s
2025-09-08 01:02:28.466389 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 12.35s
2025-09-08 01:02:28.466455 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.90s
2025-09-08 01:02:28.466466 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 11.69s
2025-09-08 01:02:28.466477 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.66s
2025-09-08 01:02:28.466488 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.23s
2025-09-08 01:02:28.466499 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.93s
2025-09-08 01:02:28.466510 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.75s
2025-09-08 01:02:28.466521 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.03s
2025-09-08 01:02:28.466532 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.93s
2025-09-08 01:02:28.466543 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.46s
2025-09-08 01:02:28.466554 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.09s
2025-09-08 01:02:28.466565 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.71s
2025-09-08 01:02:28.466576 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.66s
2025-09-08 01:02:28.466587 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.54s
2025-09-08 01:02:28.466598 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.23s
2025-09-08 01:02:28.466609 | orchestrator | 2025-09-08 01:02:28 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED
2025-09-08 01:02:28.466620 | orchestrator | 2025-09-08 01:02:28 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:02:28.466631 | orchestrator | 2025-09-08 01:02:28 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED
2025-09-08 01:02:28.466642 | orchestrator | 2025-09-08 01:02:28 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:02:31.502453 | orchestrator | 2025-09-08 01:02:31 | INFO  | Task e954a423-e94a-4812-a61a-66c8b11eaa49 is in state STARTED
2025-09-08 01:02:31.503701 | orchestrator | 2025-09-08 01:02:31 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED
2025-09-08 01:02:31.505586 | orchestrator | 2025-09-08 01:02:31 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:02:31.507538 | orchestrator | 2025-09-08 01:02:31 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED
2025-09-08 01:02:31.507884 | orchestrator | 2025-09-08 01:02:31 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:02:34.557792 | orchestrator | 2025-09-08 01:02:34 | INFO  | Task e954a423-e94a-4812-a61a-66c8b11eaa49 is in state STARTED
2025-09-08 01:02:34.560259 | orchestrator | 2025-09-08 01:02:34 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED
2025-09-08 01:02:34.561498 | orchestrator | 2025-09-08 01:02:34 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:02:34.562801 | orchestrator | 2025-09-08 01:02:34 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED
2025-09-08 01:02:34.562823 | orchestrator | 2025-09-08 01:02:34 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:02:37.605695 | orchestrator | 2025-09-08 01:02:37 | INFO  | Task e954a423-e94a-4812-a61a-66c8b11eaa49 is in state STARTED
2025-09-08 01:02:37.606467 | orchestrator | 2025-09-08 01:02:37 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED
2025-09-08 01:02:37.608204 | orchestrator | 2025-09-08 01:02:37 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:02:37.609574 | orchestrator | 2025-09-08 01:02:37 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED
2025-09-08 01:02:37.609595 | orchestrator | 2025-09-08 01:02:37 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:02:40.652100 | orchestrator | 2025-09-08 01:02:40 | INFO  | Task e954a423-e94a-4812-a61a-66c8b11eaa49 is in state STARTED
2025-09-08 01:02:40.652207 | orchestrator | 2025-09-08 01:02:40 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED
2025-09-08 01:02:40.652984 | orchestrator | 2025-09-08 01:02:40 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:02:40.654762 | orchestrator | 2025-09-08 01:02:40 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED
2025-09-08 01:02:40.654784 | orchestrator | 2025-09-08 01:02:40 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:02:43.697006 | orchestrator | 2025-09-08 01:02:43 | INFO  | Task e954a423-e94a-4812-a61a-66c8b11eaa49 is in state STARTED
2025-09-08 01:02:43.698892 | orchestrator | 2025-09-08 01:02:43 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED
2025-09-08 01:02:43.702278 | orchestrator | 2025-09-08 01:02:43 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:02:43.705233 | orchestrator | 2025-09-08 01:02:43 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED
2025-09-08 01:02:43.705330 | orchestrator | 2025-09-08 01:02:43 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:02:46.740676 | orchestrator | 2025-09-08 01:02:46 | INFO  | Task e954a423-e94a-4812-a61a-66c8b11eaa49 is in state STARTED
2025-09-08 01:02:46.740788 | orchestrator | 2025-09-08 01:02:46 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED
2025-09-08 01:02:46.740935 | orchestrator | 2025-09-08 01:02:46 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:02:46.743630 | orchestrator | 2025-09-08 01:02:46 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED
2025-09-08 01:02:46.743646 | orchestrator | 2025-09-08 01:02:46 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:02:49.780707 | orchestrator | 2025-09-08 01:02:49 | INFO  | Task e954a423-e94a-4812-a61a-66c8b11eaa49 is in state STARTED
2025-09-08 01:02:49.780830 | orchestrator | 2025-09-08 01:02:49 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED
2025-09-08 01:02:49.781353 | orchestrator | 2025-09-08 01:02:49 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:02:49.782056 | orchestrator | 2025-09-08 01:02:49 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED
2025-09-08 01:02:49.782080 | orchestrator | 2025-09-08 01:02:49 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:02:52.816141 | orchestrator | 2025-09-08 01:02:52 | INFO  | Task e954a423-e94a-4812-a61a-66c8b11eaa49 is in state STARTED
2025-09-08 01:02:52.816328 | orchestrator | 2025-09-08 01:02:52 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED
2025-09-08 01:02:52.817130 | orchestrator | 2025-09-08 01:02:52 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:02:52.817873 | orchestrator | 2025-09-08 01:02:52 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED
2025-09-08 01:02:52.817892 | orchestrator | 2025-09-08 01:02:52 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:02:55.858489 | orchestrator | 2025-09-08 01:02:55 | INFO  | Task e954a423-e94a-4812-a61a-66c8b11eaa49 is in state STARTED
2025-09-08 01:02:55.858795 | orchestrator | 2025-09-08 01:02:55 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED
2025-09-08 01:02:55.860832 | orchestrator | 2025-09-08 01:02:55 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:02:55.861788 | orchestrator | 2025-09-08 01:02:55 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED
2025-09-08 01:02:55.861852 | orchestrator | 2025-09-08 01:02:55 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:02:58.899730 | orchestrator | 2025-09-08 01:02:58 | INFO  | Task e954a423-e94a-4812-a61a-66c8b11eaa49 is in state STARTED
2025-09-08 01:02:58.899950 | orchestrator | 2025-09-08 01:02:58 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state STARTED
2025-09-08 01:02:58.901100 | orchestrator | 2025-09-08 01:02:58 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:02:58.902337 | orchestrator | 2025-09-08 01:02:58 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED
2025-09-08 01:02:58.902544 | orchestrator | 2025-09-08 01:02:58 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:03:01.941596 | orchestrator | 2025-09-08 01:03:01 | INFO  | Task e954a423-e94a-4812-a61a-66c8b11eaa49 is in state STARTED
2025-09-08 01:03:01.942856 | orchestrator | 2025-09-08 01:03:01 | INFO  | Task b390329d-871c-4085-a11a-9041c65d3cc9 is in state SUCCESS
2025-09-08 01:03:01.944589 | orchestrator | 
2025-09-08 01:03:01.944627 | orchestrator | 
2025-09-08 01:03:01.944639 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 01:03:01.944651 | orchestrator | 
2025-09-08 01:03:01.944663 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 01:03:01.944675 | orchestrator | Monday 08 September 2025 00:59:10 +0000 (0:00:00.290) 0:00:00.290 ******
2025-09-08 01:03:01.944687 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:03:01.944729 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:03:01.944742 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:03:01.944873 | orchestrator | ok: [testbed-node-3]
2025-09-08 01:03:01.944885 | orchestrator | ok: [testbed-node-4]
2025-09-08 01:03:01.944896 | orchestrator | ok: [testbed-node-5]
2025-09-08 01:03:01.944928 | orchestrator | 
2025-09-08 01:03:01.944939 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 01:03:01.944951 | orchestrator | Monday 08 September 2025 00:59:10 +0000 (0:00:00.825) 0:00:01.116 ******
2025-09-08 01:03:01.944962 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-09-08 01:03:01.945004 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-09-08 01:03:01.945016 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-09-08 01:03:01.945027 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-09-08 01:03:01.945038 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-09-08 01:03:01.945049 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-09-08 01:03:01.945059 | orchestrator | 
2025-09-08 01:03:01.945071 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-09-08 01:03:01.945083 | orchestrator | 
2025-09-08 01:03:01.945094 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-08 01:03:01.945105 | orchestrator | Monday 08 September 2025 00:59:11 +0000 (0:00:00.588) 0:00:01.704 ******
2025-09-08 01:03:01.945117 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 01:03:01.945130 | orchestrator | 
2025-09-08 01:03:01.945141 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-09-08 01:03:01.945152 | orchestrator | Monday 08 September 2025 00:59:12 +0000 (0:00:01.048) 0:00:02.753 ******
2025-09-08 01:03:01.945164 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-09-08 01:03:01.945177 | orchestrator | 
2025-09-08 01:03:01.945191 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-09-08 01:03:01.945203 | orchestrator | Monday 08 September 2025 00:59:15 +0000 (0:00:02.987) 0:00:05.740 ******
2025-09-08 01:03:01.945217 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-09-08 01:03:01.945230 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-09-08 01:03:01.945243 | orchestrator | 
2025-09-08 01:03:01.945256 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-09-08 01:03:01.945282 | orchestrator | Monday 08 September 2025 00:59:21 +0000 (0:00:05.917) 0:00:11.658 ******
2025-09-08 01:03:01.945296 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-08 01:03:01.945308 | orchestrator | 
2025-09-08 01:03:01.945322 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-09-08 01:03:01.945334 | orchestrator | Monday 08 September 2025 00:59:24 +0000 (0:00:02.914) 0:00:14.573 ******
2025-09-08 01:03:01.945347 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-08 01:03:01.945361 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-09-08 01:03:01.945408 | orchestrator | 
2025-09-08 01:03:01.945475 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-09-08 01:03:01.945489 | orchestrator | Monday 08 September 2025 00:59:28 +0000 (0:00:03.688) 0:00:18.261 ******
2025-09-08 01:03:01.945501 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-08 01:03:01.945515 | orchestrator | 
2025-09-08 01:03:01.945529 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-09-08 01:03:01.945539 | orchestrator | Monday 08 September 2025 00:59:31 +0000 (0:00:03.612) 0:00:21.874 ******
2025-09-08 01:03:01.945550 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2025-09-08 01:03:01.945579 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2025-09-08 01:03:01.945591 | orchestrator | 
2025-09-08 01:03:01.945602 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2025-09-08 01:03:01.945613 | orchestrator | Monday 08 September 2025 00:59:39 +0000 (0:00:07.842) 0:00:29.716 ******
2025-09-08 01:03:01.945627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-08 01:03:01.945673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-08 01:03:01.945687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:03:01.945701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:03:01.945714 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.945731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:03:01.945760 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.945772 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.945785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.945796 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.945813 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.945832 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}})
2025-09-08 01:03:01.945844 | orchestrator |
2025-09-08 01:03:01.945861 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-08 01:03:01.945873 | orchestrator | Monday 08 September 2025 00:59:42 +0000 (0:00:02.986) 0:00:32.703 ******
2025-09-08 01:03:01.945884 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:03:01.945895 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:03:01.945906 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:03:01.945917 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:03:01.945928 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:03:01.945939 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:03:01.945949 | orchestrator |
2025-09-08 01:03:01.945960 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-08 01:03:01.945971 | orchestrator | Monday 08 September 2025 00:59:43 +0000 (0:00:00.581) 0:00:33.284 ******
2025-09-08 01:03:01.945982 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:03:01.945993 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:03:01.946004 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:03:01.946062 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 01:03:01.946076 | orchestrator |
2025-09-08 01:03:01.946088 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-09-08 01:03:01.946099 | orchestrator | Monday 08 September 2025 00:59:43 +0000 (0:00:00.842) 0:00:34.127 ******
2025-09-08 01:03:01.946110 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-09-08 01:03:01.946121 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-09-08 01:03:01.946132 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-09-08 01:03:01.946142 | orchestrator | changed:
[testbed-node-5] => (item=cinder-backup) 2025-09-08 01:03:01.946153 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-08 01:03:01.946164 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-08 01:03:01.946175 | orchestrator | 2025-09-08 01:03:01.946186 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-08 01:03:01.946197 | orchestrator | Monday 08 September 2025 00:59:45 +0000 (0:00:01.792) 0:00:35.920 ****** 2025-09-08 01:03:01.946210 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-08 01:03:01.946237 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-08 01:03:01.946251 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-08 01:03:01.946271 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-08 01:03:01.946283 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-08 01:03:01.946294 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-08 01:03:01.946313 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-08 01:03:01.946333 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-08 01:03:01.946351 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-08 01:03:01.946363 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-08 01:03:01.946376 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-08 01:03:01.946399 | orchestrator | changed: [testbed-node-5] => 
(item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-09-08 01:03:01.946411 | orchestrator |
2025-09-08 01:03:01.946441 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-09-08 01:03:01.946453 | orchestrator | Monday 08 September 2025 00:59:49 +0000 (0:00:03.498) 0:00:39.419 ******
2025-09-08 01:03:01.946464 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-09-08 01:03:01.946475 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-09-08 01:03:01.946486 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-09-08 01:03:01.946497 | orchestrator |
2025-09-08 01:03:01.946508 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-09-08 01:03:01.946519 | orchestrator | Monday 08 September 2025 00:59:51 +0000 (0:00:02.432) 0:00:41.852 ******
2025-09-08 01:03:01.946530 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-09-08 01:03:01.946541 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-09-08 01:03:01.946552 | orchestrator | changed: [testbed-node-5] =>
(item=ceph.client.cinder.keyring)
2025-09-08 01:03:01.946562 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-09-08 01:03:01.946573 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-09-08 01:03:01.946590 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-09-08 01:03:01.946602 | orchestrator |
2025-09-08 01:03:01.946613 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-09-08 01:03:01.946623 | orchestrator | Monday 08 September 2025 00:59:55 +0000 (0:00:03.801) 0:00:45.654 ******
2025-09-08 01:03:01.946634 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-09-08 01:03:01.946645 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-09-08 01:03:01.946656 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-09-08 01:03:01.946667 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-09-08 01:03:01.946677 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-09-08 01:03:01.946688 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-09-08 01:03:01.946698 | orchestrator |
2025-09-08 01:03:01.946709 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-09-08 01:03:01.946720 | orchestrator | Monday 08 September 2025 00:59:56 +0000 (0:00:01.426) 0:00:47.081 ******
2025-09-08 01:03:01.946731 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:03:01.946742 | orchestrator |
2025-09-08 01:03:01.946752 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-09-08 01:03:01.946763 | orchestrator | Monday 08 September 2025 00:59:56 +0000 (0:00:00.119) 0:00:47.200 ******
2025-09-08 01:03:01.946774 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:03:01.946785 | orchestrator | skipping: [testbed-node-1]
2025-09-08
01:03:01.946829 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:03:01.946839 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:03:01.946867 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:03:01.946878 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:03:01.946889 | orchestrator | 2025-09-08 01:03:01.946900 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-08 01:03:01.946910 | orchestrator | Monday 08 September 2025 00:59:57 +0000 (0:00:00.530) 0:00:47.731 ****** 2025-09-08 01:03:01.946923 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 01:03:01.946935 | orchestrator | 2025-09-08 01:03:01.946946 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-08 01:03:01.946957 | orchestrator | Monday 08 September 2025 00:59:59 +0000 (0:00:01.631) 0:00:49.362 ****** 2025-09-08 01:03:01.946969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:03:01.946986 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:03:01.947007 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.947019 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.947038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:03:01.947050 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.947067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.947079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.947099 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.947111 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.947129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.947141 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.947152 | orchestrator | 2025-09-08 01:03:01.947163 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-08 01:03:01.947175 | orchestrator | Monday 08 September 2025 01:00:02 +0000 (0:00:03.276) 0:00:52.639 ****** 2025-09-08 01:03:01.947191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-08 01:03:01.947209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.947221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-08 01:03:01.947241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.947252 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-08 01:03:01.947264 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:03:01.947280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.947292 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:03:01.947303 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:03:01.947314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.947332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.947351 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:03:01.947363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.947374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.947385 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:03:01.947401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  
2025-09-08 01:03:01.947430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.947442 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:03:01.947453 | orchestrator | 2025-09-08 01:03:01.947464 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-08 01:03:01.947482 | orchestrator | Monday 08 September 2025 01:00:04 +0000 (0:00:01.799) 0:00:54.438 ****** 2025-09-08 01:03:01.947501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}})  2025-09-08 01:03:01.947513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.947525 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:03:01.947536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-08 01:03:01.947548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.947559 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:03:01.947576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-08 01:03:01.947604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.947615 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:03:01.947627 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.947638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.947649 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:03:01.947661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.947677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.947696 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:03:01.947714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.947726 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.947737 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:03:01.947748 | orchestrator | 2025-09-08 01:03:01.947759 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-08 01:03:01.947770 | orchestrator | Monday 08 September 2025 01:00:06 +0000 (0:00:02.172) 0:00:56.611 ****** 2025-09-08 01:03:01.947781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:03:01.947793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:03:01.947813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:03:01.947839 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.947851 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.947863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.947874 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.947890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}}) 2025-09-08 01:03:01.947908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.947926 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.947938 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.947949 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.947961 | orchestrator | 2025-09-08 01:03:01.947972 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-08 01:03:01.947983 | orchestrator | Monday 08 September 2025 01:00:09 +0000 (0:00:03.590) 0:01:00.201 ****** 2025-09-08 01:03:01.947994 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-08 01:03:01.948005 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:03:01.948016 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-08 01:03:01.948027 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:03:01.948038 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-08 01:03:01.948049 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-08 01:03:01.948067 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-08 01:03:01.948077 | orchestrator | 
skipping: [testbed-node-5] 2025-09-08 01:03:01.948088 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-08 01:03:01.948099 | orchestrator | 2025-09-08 01:03:01.948110 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-08 01:03:01.948126 | orchestrator | Monday 08 September 2025 01:00:11 +0000 (0:00:02.024) 0:01:02.226 ****** 2025-09-08 01:03:01.948137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:03:01.948157 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.948169 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.948181 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.948204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:03:01.948222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:03:01.948234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.948245 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.948257 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.948268 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.948291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.948303 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.948314 | orchestrator | 
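Each loop item above carries a kolla-style `healthcheck` dict (`interval`, `retries`, `start_period`, `timeout`, and a `CMD-SHELL` test). Kolla-ansible's `kolla_docker` module consumes these internally; as a rough illustration only, here is a minimal sketch of how such a dict could be translated into equivalent `docker run` health flags. The `build_healthcheck_args` helper is hypothetical, not part of kolla, and the `s` unit suffix is an assumption about how the bare numbers are interpreted:

```python
def build_healthcheck_args(hc):
    """Map a kolla-style healthcheck dict to docker run flags (illustrative only)."""
    args = [
        f"--health-interval={hc['interval']}s",   # assumed: bare values are seconds
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]
    # kolla healthcheck tests are ['CMD-SHELL', '<command>'] as seen in the log above
    if hc["test"][0] == "CMD-SHELL":
        args.append("--health-cmd=" + " ".join(hc["test"][1:]))
    return args

# Example dict copied from the cinder-scheduler item in the log
hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port cinder-scheduler 5672"],
    "timeout": "30",
}
print(build_healthcheck_args(hc))
```

The actual container creation is done by the `kolla_docker` Ansible module, so this mapping is only a reading aid for the dicts echoed in the task output.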
2025-09-08 01:03:01.948325 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-08 01:03:01.948336 | orchestrator | Monday 08 September 2025 01:00:22 +0000 (0:00:10.714) 0:01:12.940 ****** 2025-09-08 01:03:01.948353 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:03:01.948364 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:03:01.948375 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:03:01.948386 | orchestrator | changed: [testbed-node-3] 2025-09-08 01:03:01.948397 | orchestrator | changed: [testbed-node-4] 2025-09-08 01:03:01.948407 | orchestrator | changed: [testbed-node-5] 2025-09-08 01:03:01.948447 | orchestrator | 2025-09-08 01:03:01.948458 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-08 01:03:01.948469 | orchestrator | Monday 08 September 2025 01:00:24 +0000 (0:00:02.236) 0:01:15.177 ****** 2025-09-08 01:03:01.948480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-08 01:03:01.948492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.948510 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:03:01.948521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-08 01:03:01.948538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.948550 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:03:01.948568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-08 01:03:01.948580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.948591 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:03:01.948602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.948630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.948641 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:03:01.948658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.948670 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.948681 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:03:01.948700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.948711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-08 01:03:01.948730 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:03:01.948741 | orchestrator | 2025-09-08 01:03:01.948752 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-08 01:03:01.948763 | orchestrator | Monday 08 September 2025 01:00:27 +0000 (0:00:02.168) 0:01:17.345 ****** 2025-09-08 01:03:01.948774 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:03:01.948785 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:03:01.948795 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:03:01.948806 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:03:01.948817 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:03:01.948828 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:03:01.948838 | orchestrator | 2025-09-08 01:03:01.948850 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-08 01:03:01.948860 | orchestrator | Monday 08 September 2025 01:00:27 +0000 (0:00:00.605) 0:01:17.950 ****** 2025-09-08 01:03:01.948871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:03:01.948892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:03:01.948911 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.948923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:03:01.948941 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.948958 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.948970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.948991 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.949003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.949021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.949032 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 
'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.949044 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:03:01.949055 | orchestrator | 2025-09-08 01:03:01.949066 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-08 01:03:01.949082 | orchestrator | Monday 08 September 2025 01:00:30 +0000 (0:00:03.007) 0:01:20.958 ****** 2025-09-08 01:03:01.949093 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:03:01.949104 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:03:01.949114 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:03:01.949125 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:03:01.949136 | orchestrator | skipping: 
[testbed-node-4] 2025-09-08 01:03:01.949146 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:03:01.949157 | orchestrator | 2025-09-08 01:03:01.949168 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-08 01:03:01.949179 | orchestrator | Monday 08 September 2025 01:00:31 +0000 (0:00:00.744) 0:01:21.703 ****** 2025-09-08 01:03:01.949189 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:03:01.949200 | orchestrator | 2025-09-08 01:03:01.949211 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-08 01:03:01.949222 | orchestrator | Monday 08 September 2025 01:00:34 +0000 (0:00:02.690) 0:01:24.393 ****** 2025-09-08 01:03:01.949232 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:03:01.949243 | orchestrator | 2025-09-08 01:03:01.949254 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-09-08 01:03:01.949265 | orchestrator | Monday 08 September 2025 01:00:36 +0000 (0:00:02.384) 0:01:26.777 ****** 2025-09-08 01:03:01.949275 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:03:01.949286 | orchestrator | 2025-09-08 01:03:01.949297 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-08 01:03:01.949314 | orchestrator | Monday 08 September 2025 01:00:54 +0000 (0:00:18.364) 0:01:45.142 ****** 2025-09-08 01:03:01.949325 | orchestrator | 2025-09-08 01:03:01.949482 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-08 01:03:01.949498 | orchestrator | Monday 08 September 2025 01:00:54 +0000 (0:00:00.089) 0:01:45.231 ****** 2025-09-08 01:03:01.949509 | orchestrator | 2025-09-08 01:03:01.949520 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-08 01:03:01.949531 | orchestrator | Monday 08 September 2025 01:00:55 +0000 (0:00:00.095) 
0:01:45.327 ****** 2025-09-08 01:03:01.949542 | orchestrator | 2025-09-08 01:03:01.949553 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-08 01:03:01.949563 | orchestrator | Monday 08 September 2025 01:00:55 +0000 (0:00:00.098) 0:01:45.426 ****** 2025-09-08 01:03:01.949574 | orchestrator | 2025-09-08 01:03:01.949585 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-08 01:03:01.949596 | orchestrator | Monday 08 September 2025 01:00:55 +0000 (0:00:00.118) 0:01:45.545 ****** 2025-09-08 01:03:01.949606 | orchestrator | 2025-09-08 01:03:01.949617 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-08 01:03:01.949628 | orchestrator | Monday 08 September 2025 01:00:55 +0000 (0:00:00.102) 0:01:45.647 ****** 2025-09-08 01:03:01.949639 | orchestrator | 2025-09-08 01:03:01.949649 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-08 01:03:01.949660 | orchestrator | Monday 08 September 2025 01:00:55 +0000 (0:00:00.085) 0:01:45.733 ****** 2025-09-08 01:03:01.949671 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:03:01.949682 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:03:01.949697 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:03:01.949708 | orchestrator | 2025-09-08 01:03:01.949719 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-08 01:03:01.949730 | orchestrator | Monday 08 September 2025 01:01:23 +0000 (0:00:28.172) 0:02:13.905 ****** 2025-09-08 01:03:01.949740 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:03:01.949751 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:03:01.949762 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:03:01.949772 | orchestrator | 2025-09-08 01:03:01.949783 | orchestrator | RUNNING HANDLER [cinder : Restart 
cinder-volume container] ********************* 2025-09-08 01:03:01.949794 | orchestrator | Monday 08 September 2025 01:01:36 +0000 (0:00:12.576) 0:02:26.481 ****** 2025-09-08 01:03:01.949805 | orchestrator | changed: [testbed-node-4] 2025-09-08 01:03:01.949815 | orchestrator | changed: [testbed-node-5] 2025-09-08 01:03:01.949826 | orchestrator | changed: [testbed-node-3] 2025-09-08 01:03:01.949837 | orchestrator | 2025-09-08 01:03:01.949847 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-08 01:03:01.949858 | orchestrator | Monday 08 September 2025 01:02:47 +0000 (0:01:11.299) 0:03:37.781 ****** 2025-09-08 01:03:01.949869 | orchestrator | changed: [testbed-node-3] 2025-09-08 01:03:01.949880 | orchestrator | changed: [testbed-node-4] 2025-09-08 01:03:01.949891 | orchestrator | changed: [testbed-node-5] 2025-09-08 01:03:01.949901 | orchestrator | 2025-09-08 01:03:01.949912 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-08 01:03:01.949923 | orchestrator | Monday 08 September 2025 01:02:58 +0000 (0:00:11.385) 0:03:49.166 ****** 2025-09-08 01:03:01.949934 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:03:01.949944 | orchestrator | 2025-09-08 01:03:01.949955 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:03:01.949966 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-08 01:03:01.949979 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-08 01:03:01.949990 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-08 01:03:01.950009 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-08 01:03:01.950054 | orchestrator | testbed-node-4 : ok=18 
 changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-08 01:03:01.950066 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-08 01:03:01.950077 | orchestrator | 2025-09-08 01:03:01.950088 | orchestrator | 2025-09-08 01:03:01.950099 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:03:01.950110 | orchestrator | Monday 08 September 2025 01:03:00 +0000 (0:00:01.195) 0:03:50.362 ****** 2025-09-08 01:03:01.950124 | orchestrator | =============================================================================== 2025-09-08 01:03:01.950137 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 71.30s 2025-09-08 01:03:01.950150 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 28.17s 2025-09-08 01:03:01.950163 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.36s 2025-09-08 01:03:01.950177 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 12.58s 2025-09-08 01:03:01.950189 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.39s 2025-09-08 01:03:01.950203 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.71s 2025-09-08 01:03:01.950217 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.84s 2025-09-08 01:03:01.950230 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.92s 2025-09-08 01:03:01.950250 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.80s 2025-09-08 01:03:01.950264 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.69s 2025-09-08 01:03:01.950277 | orchestrator | service-ks-register : cinder | Creating roles 
--------------------------- 3.61s 2025-09-08 01:03:01.950290 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.59s 2025-09-08 01:03:01.950304 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.50s 2025-09-08 01:03:01.950317 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.28s 2025-09-08 01:03:01.950331 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.01s 2025-09-08 01:03:01.950343 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 2.99s 2025-09-08 01:03:01.950357 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.99s 2025-09-08 01:03:01.950370 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.91s 2025-09-08 01:03:01.950383 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.69s 2025-09-08 01:03:01.950398 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.43s 2025-09-08 01:03:01.950411 | orchestrator | 2025-09-08 01:03:01 | INFO  | Task 8b6a579d-2d33-403b-bdd2-f1c0b1cc9acc is in state STARTED 2025-09-08 01:03:01.950443 | orchestrator | 2025-09-08 01:03:01 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:03:01.950456 | orchestrator | 2025-09-08 01:03:01 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED 2025-09-08 01:03:01.950468 | orchestrator | 2025-09-08 01:03:01 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:03:04.995869 | orchestrator | 2025-09-08 01:03:04 | INFO  | Task e954a423-e94a-4812-a61a-66c8b11eaa49 is in state STARTED 2025-09-08 01:03:04.995985 | orchestrator | 2025-09-08 01:03:04 | INFO  | Task 8b6a579d-2d33-403b-bdd2-f1c0b1cc9acc is in state STARTED 2025-09-08 01:03:04.996258 | orchestrator | 2025-09-08 
01:03:04 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:03:04.996829 | orchestrator | 2025-09-08 01:03:04 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED 2025-09-08 01:03:04.996847 | orchestrator | 2025-09-08 01:03:04 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:04:32.898992 | orchestrator | 2025-09-08 01:04:32 | INFO  | Task e954a423-e94a-4812-a61a-66c8b11eaa49 is in state SUCCESS 2025-09-08 01:04:32.900935 | orchestrator | 2025-09-08 01:04:32.901029 | orchestrator | 2025-09-08 01:04:32.901051 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 01:04:32.901071 | orchestrator | 2025-09-08 01:04:32.901131 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 01:04:32.901152 | orchestrator | Monday 08 September 2025 01:02:32 +0000 (0:00:00.278) 0:00:00.278 ****** 2025-09-08 01:04:32.901171 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:04:32.901191 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:04:32.901210 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:04:32.901224 | orchestrator | 2025-09-08 01:04:32.901236 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 01:04:32.901247 | 
orchestrator | Monday 08 September 2025 01:02:32 +0000 (0:00:00.301) 0:00:00.579 ****** 2025-09-08 01:04:32.901258 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-09-08 01:04:32.901270 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-09-08 01:04:32.901280 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-09-08 01:04:32.901291 | orchestrator | 2025-09-08 01:04:32.901302 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-09-08 01:04:32.901312 | orchestrator | 2025-09-08 01:04:32.901323 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-08 01:04:32.901368 | orchestrator | Monday 08 September 2025 01:02:33 +0000 (0:00:00.395) 0:00:00.975 ****** 2025-09-08 01:04:32.901380 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:04:32.901393 | orchestrator | 2025-09-08 01:04:32.901404 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-08 01:04:32.901414 | orchestrator | Monday 08 September 2025 01:02:33 +0000 (0:00:00.543) 0:00:01.518 ****** 2025-09-08 01:04:32.901443 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-08 01:04:32.901455 | orchestrator | 2025-09-08 01:04:32.901466 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-08 01:04:32.901477 | orchestrator | Monday 08 September 2025 01:02:37 +0000 (0:00:03.432) 0:00:04.951 ****** 2025-09-08 01:04:32.901489 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-08 01:04:32.901537 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-08 01:04:32.901550 | orchestrator | 2025-09-08 
01:04:32.901563 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-09-08 01:04:32.901576 | orchestrator | Monday 08 September 2025 01:02:43 +0000 (0:00:06.396) 0:00:11.348 ******
2025-09-08 01:04:32.901589 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-08 01:04:32.901602 | orchestrator |
2025-09-08 01:04:32.901614 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-09-08 01:04:32.901633 | orchestrator | Monday 08 September 2025 01:02:46 +0000 (0:00:03.479) 0:00:14.827 ******
2025-09-08 01:04:32.901651 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-08 01:04:32.901670 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-09-08 01:04:32.901688 | orchestrator |
2025-09-08 01:04:32.901706 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-09-08 01:04:32.901725 | orchestrator | Monday 08 September 2025 01:02:50 +0000 (0:00:03.898) 0:00:18.726 ******
2025-09-08 01:04:32.901744 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-08 01:04:32.901764 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-09-08 01:04:32.901782 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-09-08 01:04:32.901831 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-09-08 01:04:32.901852 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-09-08 01:04:32.901866 | orchestrator |
2025-09-08 01:04:32.901876 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-09-08 01:04:32.901888 | orchestrator | Monday 08 September 2025 01:03:06 +0000 (0:00:15.411) 0:00:34.137 ******
2025-09-08 01:04:32.901907 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-09-08 01:04:32.901925 | orchestrator |
2025-09-08 01:04:32.901943 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-09-08 01:04:32.901961 | orchestrator | Monday 08 September 2025 01:03:10 +0000 (0:00:04.033) 0:00:38.171 ******
2025-09-08 01:04:32.901985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-08 01:04:32.902103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-08 01:04:32.902143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-08 01:04:32.902165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.902204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.902226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.902258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.902280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.902308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.902331 | orchestrator |
2025-09-08 01:04:32.902352 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-09-08 01:04:32.902373 | orchestrator | Monday 08 September 2025 01:03:12 +0000 (0:00:02.300) 0:00:40.471 ******
2025-09-08 01:04:32.902393 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-09-08 01:04:32.902413 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-09-08 01:04:32.902431 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-09-08 01:04:32.902451 | orchestrator |
2025-09-08 01:04:32.902470 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-09-08 01:04:32.902520 | orchestrator | Monday 08 September 2025 01:03:14 +0000 (0:00:01.839) 0:00:42.311 ******
2025-09-08 01:04:32.902541 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:04:32.902559 | orchestrator |
2025-09-08 01:04:32.902577 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-09-08 01:04:32.902595 | orchestrator | Monday 08 September 2025 01:03:14 +0000 (0:00:00.200) 0:00:42.511 ******
2025-09-08 01:04:32.902614 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:04:32.902675 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:04:32.902697 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:04:32.902716 | orchestrator |
2025-09-08 01:04:32.902735 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-09-08 01:04:32.902754 | orchestrator | Monday 08 September 2025 01:03:15 +0000 (0:00:00.434) 0:00:42.946 ******
2025-09-08 01:04:32.902773 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:04:32.902792 | orchestrator |
2025-09-08 01:04:32.902809 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-09-08 01:04:32.902827 | orchestrator | Monday 08 September 2025 01:03:15 +0000 (0:00:00.458) 0:00:43.405 ******
2025-09-08 01:04:32.902846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-08 01:04:32.902879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-08 01:04:32.902906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-08 01:04:32.902938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.902957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.902977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.902997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.903027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.903048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.903066 | orchestrator |
2025-09-08 01:04:32.903084 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2025-09-08 01:04:32.903115 | orchestrator | Monday 08 September 2025 01:03:19 +0000 (0:00:03.546) 0:00:46.951 ******
2025-09-08 01:04:32.903141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-08 01:04:32.903161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.903182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.903201 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:04:32.903228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-08 01:04:32.903248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.903284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.903302 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:04:32.903321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-08 01:04:32.903340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.903359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.903378 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:04:32.903397 | orchestrator |
2025-09-08 01:04:32.903416 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2025-09-08 01:04:32.903435 | orchestrator | Monday 08 September 2025 01:03:20 +0000 (0:00:01.560) 0:00:48.512 ******
2025-09-08 01:04:32.903466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-08 01:04:32.903547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.903573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.903594 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:04:32.903613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-08 01:04:32.903634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.903680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.903703 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:04:32.903735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-08 01:04:32.903805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.903828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.903848 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:04:32.903866 | orchestrator |
2025-09-08 01:04:32.903884 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2025-09-08 01:04:32.903903 | orchestrator | Monday 08 September 2025 01:03:21 +0000 (0:00:01.018) 0:00:49.531 ******
2025-09-08 01:04:32.903923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-08 01:04:32.903954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-08 01:04:32.903988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-08 01:04:32.904017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.904038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.904060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.904079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.904109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.904142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:04:32.904162 | orchestrator |
2025-09-08 01:04:32.904181 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2025-09-08 01:04:32.904201 | orchestrator | Monday 08 September 2025 01:03:24 +0000 (0:00:03.258) 0:00:52.789 ******
2025-09-08 01:04:32.904220 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:04:32.904239 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:04:32.904258 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:04:32.904276 | orchestrator |
2025-09-08 01:04:32.904296 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2025-09-08 01:04:32.904316 | orchestrator | Monday 08 September 2025 01:03:27 +0000 (0:00:02.223) 0:00:55.013 ******
2025-09-08 01:04:32.904336 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-08 01:04:32.904356 | orchestrator |
2025-09-08 01:04:32.904376 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2025-09-08 01:04:32.904404 | orchestrator | Monday 08 September 2025 01:03:28 +0000 (0:00:01.212) 0:00:56.225 ******
2025-09-08 01:04:32.904424 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:04:32.904444 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:04:32.904464 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:04:32.904483 | orchestrator |
2025-09-08 01:04:32.904534 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2025-09-08 01:04:32.904553 | orchestrator | Monday 08 September 2025 01:03:28 +0000 (0:00:00.592) 0:00:56.817 ******
2025-09-08 01:04:32.904573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:32.904593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:32.904627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:32.904640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:32.904658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:32.904679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:32.904695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:32.904712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:32.904737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:32.904754 | orchestrator | 2025-09-08 01:04:32.904770 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-08 01:04:32.904786 | orchestrator | Monday 08 September 2025 01:03:37 +0000 (0:00:08.276) 0:01:05.094 ****** 2025-09-08 01:04:32.904813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-08 01:04:32.904837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 01:04:32.904855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:04:32.904871 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:04:32.904887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-08 01:04:32.904914 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 01:04:32.904941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:04:32.904958 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:04:32.904980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-08 01:04:32.904999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 01:04:32.905016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:04:32.905033 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:04:32.905050 | orchestrator | 2025-09-08 01:04:32.905067 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-08 01:04:32.905094 | orchestrator | Monday 08 September 2025 01:03:38 +0000 (0:00:00.940) 0:01:06.034 ****** 2025-09-08 01:04:32.905112 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:32.905141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:32.905167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 
'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:32.905185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:32.905202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:32.905228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:32.905245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:32.905272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:32.905288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:32.905303 | orchestrator | 2025-09-08 01:04:32.905319 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-08 01:04:32.905335 | orchestrator | Monday 08 September 2025 01:03:41 +0000 (0:00:03.283) 0:01:09.318 ****** 2025-09-08 01:04:32.905351 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:04:32.905366 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:04:32.905393 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:04:32.905409 | orchestrator | 2025-09-08 01:04:32.905425 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-09-08 01:04:32.905440 | orchestrator | Monday 08 September 2025 01:03:41 +0000 (0:00:00.509) 0:01:09.827 ****** 2025-09-08 01:04:32.905454 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:04:32.905469 | orchestrator | 2025-09-08 01:04:32.905485 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-09-08 01:04:32.905525 | orchestrator | Monday 08 September 2025 01:03:44 +0000 (0:00:02.227) 0:01:12.055 ****** 2025-09-08 01:04:32.905541 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:04:32.905558 | 
orchestrator | 2025-09-08 01:04:32.905574 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-09-08 01:04:32.905590 | orchestrator | Monday 08 September 2025 01:03:46 +0000 (0:00:02.346) 0:01:14.402 ****** 2025-09-08 01:04:32.905618 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:04:32.905635 | orchestrator | 2025-09-08 01:04:32.905650 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-08 01:04:32.905665 | orchestrator | Monday 08 September 2025 01:03:58 +0000 (0:00:12.185) 0:01:26.587 ****** 2025-09-08 01:04:32.905675 | orchestrator | 2025-09-08 01:04:32.905685 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-08 01:04:32.905694 | orchestrator | Monday 08 September 2025 01:03:58 +0000 (0:00:00.080) 0:01:26.667 ****** 2025-09-08 01:04:32.905703 | orchestrator | 2025-09-08 01:04:32.905713 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-08 01:04:32.905722 | orchestrator | Monday 08 September 2025 01:03:58 +0000 (0:00:00.066) 0:01:26.734 ****** 2025-09-08 01:04:32.905731 | orchestrator | 2025-09-08 01:04:32.905741 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-09-08 01:04:32.905751 | orchestrator | Monday 08 September 2025 01:03:58 +0000 (0:00:00.071) 0:01:26.806 ****** 2025-09-08 01:04:32.905768 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:04:32.905784 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:04:32.905799 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:04:32.905815 | orchestrator | 2025-09-08 01:04:32.905831 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-09-08 01:04:32.905848 | orchestrator | Monday 08 September 2025 01:04:13 +0000 (0:00:14.216) 0:01:41.023 ****** 2025-09-08 01:04:32.905864 | 
orchestrator | changed: [testbed-node-0] 2025-09-08 01:04:32.905880 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:04:32.905897 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:04:32.905913 | orchestrator | 2025-09-08 01:04:32.905926 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-09-08 01:04:32.905936 | orchestrator | Monday 08 September 2025 01:04:20 +0000 (0:00:07.872) 0:01:48.895 ****** 2025-09-08 01:04:32.905945 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:04:32.905955 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:04:32.905964 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:04:32.905973 | orchestrator | 2025-09-08 01:04:32.905983 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:04:32.905994 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-08 01:04:32.906005 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-08 01:04:32.906015 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-08 01:04:32.906060 | orchestrator | 2025-09-08 01:04:32.906070 | orchestrator | 2025-09-08 01:04:32.906079 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:04:32.906089 | orchestrator | Monday 08 September 2025 01:04:32 +0000 (0:00:11.446) 0:02:00.341 ****** 2025-09-08 01:04:32.906099 | orchestrator | =============================================================================== 2025-09-08 01:04:32.906108 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.41s 2025-09-08 01:04:32.906127 | orchestrator | barbican : Restart barbican-api container ------------------------------ 14.22s 2025-09-08 01:04:32.906137 | orchestrator | 
barbican : Running barbican bootstrap container ------------------------ 12.19s 2025-09-08 01:04:32.906147 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.45s 2025-09-08 01:04:32.906156 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.28s 2025-09-08 01:04:32.906166 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 7.87s 2025-09-08 01:04:32.906176 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.40s 2025-09-08 01:04:32.906185 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.03s 2025-09-08 01:04:32.906203 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.90s 2025-09-08 01:04:32.906212 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.55s 2025-09-08 01:04:32.906222 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.48s 2025-09-08 01:04:32.906232 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.43s 2025-09-08 01:04:32.906241 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.28s 2025-09-08 01:04:32.906251 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.26s 2025-09-08 01:04:32.906260 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.35s 2025-09-08 01:04:32.906270 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.30s 2025-09-08 01:04:32.906285 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.23s 2025-09-08 01:04:32.906295 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.22s 2025-09-08 01:04:32.906305 | orchestrator | barbican : 
orchestrator | Ensuring vassals config directories exist -------------------- 1.84s
orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.56s
orchestrator | 2025-09-08 01:04:32 | INFO  | Task 8b6a579d-2d33-403b-bdd2-f1c0b1cc9acc is in state STARTED
orchestrator | 2025-09-08 01:04:32 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
orchestrator | 2025-09-08 01:04:32 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED
orchestrator | 2025-09-08 01:04:32 | INFO  | Wait 1 second(s) until the next check
orchestrator | [identical polling cycles repeated roughly every 3 seconds from 01:04:35 through 01:06:10; tasks cfc744c1-9dcb-4836-bf71-4290e54f3724, 8b6a579d-2d33-403b-bdd2-f1c0b1cc9acc, 7705de78-ad75-44b9-9325-9c376018107f and 585bab86-172b-492a-9c6a-a7464b042ce0 remained in state STARTED]
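The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" lines are the OSISM manager polling task state until every submitted task reaches a terminal state. A minimal sketch of such a wait loop, with hypothetical helper names (this is not the actual osism implementation):

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}


def wait_for_tasks(get_state, task_ids, interval=1):
    """Poll task states until every task reaches a terminal state.

    get_state is a hypothetical callable mapping a task id to its
    current state string (e.g. what Celery's AsyncResult.state returns).
    """
    pending = set(task_ids)
    while pending:
        # sorted() copies the set, so discarding while looping is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```

In the log above the loop sleeps 1 second between checks, but the observed cycle is ~3 seconds because querying each task's state also takes time.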
orchestrator | 2025-09-08 01:06:13 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
orchestrator | 2025-09-08 01:06:13 | INFO  | Task 8b6a579d-2d33-403b-bdd2-f1c0b1cc9acc is in state STARTED
orchestrator | 2025-09-08 01:06:13 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
orchestrator | 2025-09-08 01:06:13 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state STARTED
orchestrator | 2025-09-08 01:06:13 | INFO  | Wait 1 second(s) until the next check
orchestrator | 2025-09-08 01:06:16 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED
orchestrator | 2025-09-08 01:06:16 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
orchestrator | 2025-09-08 01:06:16 | INFO  | Task 8b6a579d-2d33-403b-bdd2-f1c0b1cc9acc is in state STARTED
orchestrator | 2025-09-08 01:06:16 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
orchestrator | 2025-09-08 01:06:16 | INFO  | Task 585bab86-172b-492a-9c6a-a7464b042ce0 is in state SUCCESS
orchestrator |
orchestrator | PLAY [Group hosts based on configuration] **************************************
orchestrator |
orchestrator | TASK [Group hosts based on Kolla action] ***************************************
orchestrator | Monday 08 September 2025 01:02:06 +0000 (0:00:00.275)       0:00:00.275 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [Group hosts based on enabled services] ***********************************
orchestrator | Monday 08 September 2025 01:02:07 +0000 (0:00:00.788)       0:00:01.064 ******
orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
orchestrator |
orchestrator | PLAY [Apply role neutron] ******************************************************
orchestrator |
orchestrator | TASK [neutron : include_tasks] *************************************************
orchestrator | Monday 08 September 2025 01:02:07 +0000 (0:00:00.604)       0:00:01.669 ******
orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [neutron : Get container facts] *******************************************
orchestrator | Monday 08 September 2025 01:02:09 +0000 (0:00:01.176)       0:00:02.845 ******
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [neutron : Get container volume facts] ************************************
orchestrator | Monday 08 September 2025 01:02:10 +0000 (0:00:01.246)       0:00:04.091 ******
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
orchestrator | Monday 08 September 2025 01:02:11 +0000 (0:00:01.067)       0:00:05.159 ******
orchestrator | ok: [testbed-node-0] => {
orchestrator |     "changed": false,
orchestrator |     "msg": "All assertions passed"
orchestrator | }
orchestrator | ok: [testbed-node-1] => {
orchestrator |     "changed": false,
orchestrator |     "msg": "All assertions passed"
orchestrator | }
orchestrator | ok: [testbed-node-2] => {
orchestrator |     "changed": false,
orchestrator |     "msg": "All assertions passed"
orchestrator | }
orchestrator | ok: [testbed-node-3] => {
orchestrator |     "changed": false,
orchestrator |     "msg": "All assertions passed"
orchestrator | }
orchestrator | ok: [testbed-node-4] => {
orchestrator |     "changed": false,
orchestrator |     "msg": "All assertions passed"
orchestrator | }
orchestrator | ok: [testbed-node-5] => {
orchestrator |     "changed": false,
orchestrator |     "msg": "All assertions passed"
orchestrator | }
orchestrator |
orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
orchestrator | Monday 08 September 2025 01:02:12 +0000 (0:00:00.812)       0:00:05.971 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
orchestrator | Monday 08 September 2025 01:02:12 +0000 (0:00:00.590)       0:00:06.561 ******
orchestrator | changed: [testbed-node-0] => (item=neutron (network))
orchestrator |
orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
orchestrator | Monday 08 September 2025 01:02:16 +0000 (0:00:03.503)       0:00:10.065 ******
orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
orchestrator |
orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
orchestrator | Monday 08 September 2025 01:02:22 +0000 (0:00:06.480)       0:00:16.546 ******
orchestrator | ok: [testbed-node-0] => (item=service)
orchestrator |
orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
orchestrator | Monday 08 September 2025 01:02:25 +0000 (0:00:03.062)       0:00:19.609 ******
orchestrator | [WARNING]: Module did not set no_log for update_password
orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
orchestrator |
orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
orchestrator | Monday 08 September 2025 01:02:29 +0000 (0:00:03.692)       0:00:23.301 ******
orchestrator | ok: [testbed-node-0] => (item=admin)
orchestrator |
orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
orchestrator | Monday 08 September 2025 01:02:32 +0000 (0:00:03.347)       0:00:26.649 ******
orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
orchestrator |
orchestrator | TASK [neutron : include_tasks] *************************************************
orchestrator | Monday 08 September 2025 01:02:40 +0000 (0:00:07.490)       0:00:34.139 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [Load and persist kernel modules] *****************************************
orchestrator | Monday 08 September 2025 01:02:41 +0000 (0:00:00.750)       0:00:34.889 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [neutron : Check IPv6 support] ********************************************
orchestrator | Monday 08 September 2025 01:02:43 +0000 (0:00:01.963)       0:00:36.853 ******
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [Setting sysctl values] ***************************************************
orchestrator | Monday 08 September 2025 01:02:44 +0000 (0:00:01.056)       0:00:37.910 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
orchestrator | Monday 08 September 2025 01:02:45 +0000 (0:00:01.709)       0:00:39.620 ******
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True,
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.480963 | orchestrator |
2025-09-08 01:06:16.480974 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2025-09-08 01:06:16.480985 | orchestrator | Monday 08 September 2025 01:02:48 +0000 (0:00:03.007) 0:00:42.628 ******
2025-09-08 01:06:16.480996 | orchestrator | [WARNING]: Skipped
2025-09-08 01:06:16.481008 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2025-09-08 01:06:16.481019 | orchestrator | due to this access issue:
2025-09-08 01:06:16.481030 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2025-09-08 01:06:16.481041 | orchestrator | a directory
2025-09-08 01:06:16.481052 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-08 01:06:16.481063 | orchestrator |
2025-09-08 01:06:16.481074 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-08 01:06:16.481089 | orchestrator | Monday 08 September 2025 01:02:49 +0000 (0:00:00.939) 0:00:43.567 ******
2025-09-08 01:06:16.481106 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 01:06:16.481119 | orchestrator |
2025-09-08 01:06:16.481129 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2025-09-08 01:06:16.481140 | orchestrator | Monday 08 September 2025 01:02:51
+0000 (0:00:01.308) 0:00:44.876 ****** 2025-09-08 01:06:16.481151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 01:06:16.481164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 01:06:16.481182 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 01:06:16.481195 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 01:06:16.481219 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 01:06:16.481232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 01:06:16.481243 | orchestrator | 2025-09-08 01:06:16.481254 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-08 01:06:16.481272 | orchestrator | Monday 08 September 2025 01:02:54 +0000 (0:00:03.402) 0:00:48.278 ****** 2025-09-08 01:06:16.481283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:16.481294 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:16.481305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:16.481317 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:16.481328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:16.481344 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:16.481361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:16.481373 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:16.481384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:16.481402 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:16.481413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:16.481424 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:06:16.481435 | orchestrator | 2025-09-08 01:06:16.481446 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-08 01:06:16.481456 | orchestrator | Monday 08 September 2025 01:02:57 +0000 (0:00:03.025) 0:00:51.304 ****** 2025-09-08 01:06:16.481467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:16.481478 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:16.481500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:16.481512 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:16.481523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:16.481547 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:16.481558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:16.481569 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:16.481595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:16.481607 | orchestrator | skipping: 
[testbed-node-4]
2025-09-08 01:06:16.481618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.481629 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:16.481640 | orchestrator |
2025-09-08 01:06:16.481651 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2025-09-08 01:06:16.481661 | orchestrator | Monday 08 September 2025 01:03:00 +0000 (0:00:02.833) 0:00:54.595 ******
2025-09-08 01:06:16.481672 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:16.481683 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:16.481694 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:16.481705 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:16.481715 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:16.481726 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:16.481736 | orchestrator |
2025-09-08 01:06:16.481747 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2025-09-08 01:06:16.481765 | orchestrator | Monday 08 September 2025 01:03:03 +0000 (0:00:00.091) 0:00:57.429 ******
2025-09-08 01:06:16.481782 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:16.481793 | orchestrator |
2025-09-08 01:06:16.481804 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2025-09-08 01:06:16.481820 | orchestrator | Monday 08 September 2025 01:03:03 +0000 (0:00:00.091) 0:00:57.520 ******
2025-09-08 01:06:16.481831 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:16.481842 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:16.481852 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:16.481863 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:16.481873 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:16.481884 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:16.481895 | orchestrator |
2025-09-08 01:06:16.481905 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2025-09-08 01:06:16.481916 | orchestrator | Monday 08 September 2025 01:03:04 +0000 (0:00:00.931) 0:00:58.452 ******
2025-09-08 01:06:16.481927 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.481939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups':
True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:16.481950 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:16.481961 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:16.481972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:16.481983 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:16.481994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:16.482058 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:16.482518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:16.482540 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:16.482552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.482563 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:16.482574 | orchestrator |
2025-09-08 01:06:16.482639 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2025-09-08 01:06:16.482651 | orchestrator | Monday 08 September 2025 01:03:07 +0000 (0:00:02.332) 0:01:00.785 ******
2025-09-08 01:06:16.482662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:16.482674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:16.482711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:16.482724 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.482736 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.482747 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.482759 | orchestrator |
2025-09-08 01:06:16.482770 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2025-09-08 01:06:16.482781 | orchestrator | Monday 08 September 2025 01:03:10 +0000 (0:00:03.346) 0:01:04.131 ******
2025-09-08 01:06:16.482792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:16.482821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:16.482833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:16.482845 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.482856 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.482876 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.482888 | orchestrator |
2025-09-08 01:06:16.482899 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2025-09-08 01:06:16.482910 | orchestrator | Monday 08 September 2025 01:03:16 +0000 (0:00:05.949) 0:01:10.081 ******
2025-09-08 01:06:16.482934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:16.482946 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:16.482957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:16.482968 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:16.482979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.482991 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:16.483002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.483019 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:16.483031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:16.483042 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:16.483065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.483077 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:16.483088 | orchestrator |
2025-09-08 01:06:16.483101 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-09-08 01:06:16.483114 | orchestrator | Monday 08 September 2025 01:03:19 +0000 (0:00:02.917) 0:01:12.998 ******
2025-09-08 01:06:16.483126 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:16.483139 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:16.483151 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:16.483162 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:06:16.483173 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:06:16.483184 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:06:16.483196 | orchestrator |
2025-09-08 01:06:16.483208 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-09-08 01:06:16.483219 | orchestrator | Monday 08 September 2025 01:03:22 +0000 (0:00:02.860) 0:01:15.859 ******
2025-09-08 01:06:16.483231 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.483249 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:16.483261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.483273 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:16.483285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.483296 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:16.483320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:16.483333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:16.483345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:16.483368 | orchestrator |
2025-09-08 01:06:16.483379 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-09-08 01:06:16.483391 | orchestrator | Monday 08 September 2025 01:03:25 +0000 (0:00:03.690) 0:01:19.549 ******
2025-09-08 01:06:16.483401 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:16.483411 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:16.483420 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:16.483430 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:16.483439 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:16.483449 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:16.483458 | orchestrator |
2025-09-08 01:06:16.483468 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-09-08 01:06:16.483478 | orchestrator | Monday 08 September 2025 01:03:28 +0000 (0:00:02.732) 0:01:22.281 ******
2025-09-08 01:06:16.483487 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:16.483497 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:16.483506 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:16.483516 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:16.483525 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:16.483535 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:16.483544 | orchestrator |
2025-09-08 01:06:16.483554 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-09-08 01:06:16.483563 | orchestrator | Monday 08 September 2025 01:03:31 +0000 (0:00:02.642) 0:01:24.924 ******
2025-09-08 01:06:16.483573 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:16.483599 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:16.483609 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:16.483619 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:16.483628 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:16.483638 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:16.483647 | orchestrator |
2025-09-08 01:06:16.483657 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-09-08 01:06:16.483667 | orchestrator | Monday 08 September 2025 01:03:33 +0000 (0:00:02.151) 0:01:27.076 ******
2025-09-08 01:06:16.483676 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:16.483686 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:16.483695 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:16.483705 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:16.483714 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:16.483724 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:16.483733 | orchestrator |
2025-09-08 01:06:16.483743 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-09-08 01:06:16.483752 | orchestrator | Monday 08 September 2025 01:03:36 +0000 (0:00:02.928) 0:01:30.005 ******
2025-09-08 01:06:16.483762 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:16.483772 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:16.483781 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:16.483791 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:16.483805 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:16.483816 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:16.483825 | orchestrator |
2025-09-08 01:06:16.483835 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-09-08 01:06:16.483849 | orchestrator | Monday 08 September 2025 01:03:38 +0000 (0:00:02.558) 0:01:32.564 ******
2025-09-08 01:06:16.483859 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:16.483869 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:16.483878 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:16.483894 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:16.483904 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:16.483913 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:16.483923 | orchestrator |
2025-09-08 01:06:16.483932 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-09-08 01:06:16.483942 | orchestrator | Monday 08 September 2025 01:03:41 +0000 (0:00:02.613) 0:01:35.178 ******
2025-09-08 01:06:16.483951 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-08 01:06:16.483961 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:16.483970 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-08 01:06:16.483980 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:16.483990 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-08 01:06:16.483999 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:16.484009 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-08 01:06:16.484018 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:16.484028 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-08 01:06:16.484037 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:16.484047 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-08 01:06:16.484056 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:16.484066 | orchestrator |
2025-09-08 01:06:16.484075 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2025-09-08 01:06:16.484085 | orchestrator | Monday 08 September 2025 01:03:43 +0000 (0:00:02.392) 0:01:37.570 ******
2025-09-08 01:06:16.484095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:16.484105 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:16.484115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:16.484125 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:16.484139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.484160 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:16.484170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:16.484180 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:16.484190 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.484200 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:16.484210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.484220 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:16.484229 | orchestrator |
2025-09-08 01:06:16.484239 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2025-09-08 01:06:16.484249 | orchestrator | Monday 08 September 2025 01:03:45 +0000 (0:00:02.164) 0:01:39.734 ******
2025-09-08 01:06:16.484259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:16.484275 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:16.484296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.484307 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:16.484317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:16.484327 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:16.484337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:16.484347 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:16.484357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.484367 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:16.484384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:16.484394 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:16.484404 | orchestrator |
2025-09-08 01:06:16.484413 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-09-08 01:06:16.484423 | orchestrator | Monday 08 September 2025 01:03:48 +0000 (0:00:02.888) 0:01:42.622 ******
2025-09-08 01:06:16.484433 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:16.484447 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:16.484457 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:16.484466 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:16.484476 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:16.484490 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:16.484500 | orchestrator |
2025-09-08 01:06:16.484509 | orchestrator | TASK [neutron : Copying over
neutron_ovn_metadata_agent.ini] ******************* 2025-09-08 01:06:16.484519 | orchestrator | Monday 08 September 2025 01:03:51 +0000 (0:00:02.172) 0:01:44.795 ****** 2025-09-08 01:06:16.484529 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:16.484538 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:16.484548 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:16.484558 | orchestrator | changed: [testbed-node-3] 2025-09-08 01:06:16.484567 | orchestrator | changed: [testbed-node-4] 2025-09-08 01:06:16.484576 | orchestrator | changed: [testbed-node-5] 2025-09-08 01:06:16.484625 | orchestrator | 2025-09-08 01:06:16.484635 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-09-08 01:06:16.484645 | orchestrator | Monday 08 September 2025 01:03:54 +0000 (0:00:03.957) 0:01:48.753 ****** 2025-09-08 01:06:16.484655 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:16.484664 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:16.484674 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:16.484683 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:16.484693 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:06:16.484702 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:16.484712 | orchestrator | 2025-09-08 01:06:16.484722 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-09-08 01:06:16.484731 | orchestrator | Monday 08 September 2025 01:03:58 +0000 (0:00:03.993) 0:01:52.746 ****** 2025-09-08 01:06:16.484741 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:16.484750 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:16.484760 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:16.484769 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:16.484779 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:06:16.484788 | orchestrator | skipping: 
[testbed-node-5] 2025-09-08 01:06:16.484798 | orchestrator | 2025-09-08 01:06:16.484807 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-09-08 01:06:16.484817 | orchestrator | Monday 08 September 2025 01:04:03 +0000 (0:00:04.662) 0:01:57.408 ****** 2025-09-08 01:06:16.484826 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:16.484836 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:16.484845 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:16.484855 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:06:16.484864 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:16.484874 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:16.484891 | orchestrator | 2025-09-08 01:06:16.484901 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-09-08 01:06:16.484910 | orchestrator | Monday 08 September 2025 01:04:06 +0000 (0:00:02.786) 0:02:00.195 ****** 2025-09-08 01:06:16.484920 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:16.484929 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:16.484939 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:16.484948 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:16.484958 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:06:16.484967 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:16.484977 | orchestrator | 2025-09-08 01:06:16.484986 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-09-08 01:06:16.484996 | orchestrator | Monday 08 September 2025 01:04:10 +0000 (0:00:03.623) 0:02:03.818 ****** 2025-09-08 01:06:16.485005 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:16.485012 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:16.485020 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:16.485028 | orchestrator | skipping: 
[testbed-node-4] 2025-09-08 01:06:16.485036 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:16.485044 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:16.485051 | orchestrator | 2025-09-08 01:06:16.485059 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-09-08 01:06:16.485067 | orchestrator | Monday 08 September 2025 01:04:12 +0000 (0:00:02.827) 0:02:06.646 ****** 2025-09-08 01:06:16.485075 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:16.485083 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:16.485090 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:16.485098 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:16.485106 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:06:16.485114 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:16.485122 | orchestrator | 2025-09-08 01:06:16.485129 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-09-08 01:06:16.485137 | orchestrator | Monday 08 September 2025 01:04:17 +0000 (0:00:04.539) 0:02:11.185 ****** 2025-09-08 01:06:16.485145 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:16.485153 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:16.485161 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:16.485168 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:16.485176 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:16.485184 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:06:16.485192 | orchestrator | 2025-09-08 01:06:16.485200 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-09-08 01:06:16.485207 | orchestrator | Monday 08 September 2025 01:04:19 +0000 (0:00:02.272) 0:02:13.458 ****** 2025-09-08 01:06:16.485215 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-08 01:06:16.485224 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:16.485231 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-08 01:06:16.485239 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:16.485247 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-08 01:06:16.485255 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:16.485263 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-08 01:06:16.485271 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:16.485283 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-08 01:06:16.485291 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:06:16.485303 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-08 01:06:16.485311 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:16.485319 | orchestrator | 2025-09-08 01:06:16.485332 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-08 01:06:16.485340 | orchestrator | Monday 08 September 2025 01:04:24 +0000 (0:00:04.464) 0:02:17.922 ****** 2025-09-08 01:06:16.485348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:16.485356 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:16.485364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:16.485373 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:16.485381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:16.485389 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:06:16.485397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:16.485405 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:16.485421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:16.485435 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:16.485443 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:16.485451 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:16.485459 | orchestrator | 2025-09-08 01:06:16.485467 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-08 01:06:16.485475 | orchestrator | Monday 08 September 2025 01:04:26 +0000 (0:00:02.239) 0:02:20.161 ****** 2025-09-08 01:06:16.485483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 01:06:16.485491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 01:06:16.485504 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 
01:06:16.485526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 01:06:16.485535 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 01:06:16.485543 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 01:06:16.485551 | orchestrator | 2025-09-08 01:06:16.485559 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-08 01:06:16.485567 | orchestrator | Monday 08 September 2025 01:04:29 +0000 (0:00:03.358) 0:02:23.520 ****** 2025-09-08 01:06:16.485575 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:16.485596 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:16.485604 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:16.485611 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:16.485619 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:06:16.485627 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:16.485635 | orchestrator | 2025-09-08 01:06:16.485643 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-09-08 01:06:16.485651 | orchestrator | Monday 08 September 2025 01:04:30 +0000 (0:00:00.804) 0:02:24.324 ****** 2025-09-08 01:06:16.485659 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:06:16.485667 | orchestrator | 2025-09-08 01:06:16.485674 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-09-08 01:06:16.485682 | orchestrator | Monday 08 September 2025 01:04:32 +0000 (0:00:02.135) 0:02:26.460 ****** 2025-09-08 01:06:16.485690 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:06:16.485704 | orchestrator | 2025-09-08 01:06:16.485712 | orchestrator | TASK [neutron : 
Running Neutron bootstrap container] *************************** 2025-09-08 01:06:16.485720 | orchestrator | Monday 08 September 2025 01:04:34 +0000 (0:00:02.202) 0:02:28.663 ****** 2025-09-08 01:06:16.485727 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:06:16.485735 | orchestrator | 2025-09-08 01:06:16.485743 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-08 01:06:16.485751 | orchestrator | Monday 08 September 2025 01:05:19 +0000 (0:00:44.756) 0:03:13.419 ****** 2025-09-08 01:06:16.485759 | orchestrator | 2025-09-08 01:06:16.485766 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-08 01:06:16.485774 | orchestrator | Monday 08 September 2025 01:05:19 +0000 (0:00:00.087) 0:03:13.507 ****** 2025-09-08 01:06:16.485782 | orchestrator | 2025-09-08 01:06:16.485790 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-08 01:06:16.485798 | orchestrator | Monday 08 September 2025 01:05:20 +0000 (0:00:00.515) 0:03:14.022 ****** 2025-09-08 01:06:16.485806 | orchestrator | 2025-09-08 01:06:16.485813 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-08 01:06:16.485821 | orchestrator | Monday 08 September 2025 01:05:20 +0000 (0:00:00.103) 0:03:14.126 ****** 2025-09-08 01:06:16.485829 | orchestrator | 2025-09-08 01:06:16.485841 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-08 01:06:16.485849 | orchestrator | Monday 08 September 2025 01:05:20 +0000 (0:00:00.070) 0:03:14.196 ****** 2025-09-08 01:06:16.485857 | orchestrator | 2025-09-08 01:06:16.485868 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-08 01:06:16.485877 | orchestrator | Monday 08 September 2025 01:05:20 +0000 (0:00:00.070) 0:03:14.267 ****** 2025-09-08 01:06:16.485885 | 
orchestrator | 2025-09-08 01:06:16.485892 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-09-08 01:06:16.485900 | orchestrator | Monday 08 September 2025 01:05:20 +0000 (0:00:00.073) 0:03:14.340 ****** 2025-09-08 01:06:16.485908 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:06:16.485916 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:06:16.485924 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:06:16.485932 | orchestrator | 2025-09-08 01:06:16.485939 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-09-08 01:06:16.485947 | orchestrator | Monday 08 September 2025 01:05:47 +0000 (0:00:27.088) 0:03:41.428 ****** 2025-09-08 01:06:16.485955 | orchestrator | changed: [testbed-node-3] 2025-09-08 01:06:16.485963 | orchestrator | changed: [testbed-node-4] 2025-09-08 01:06:16.485971 | orchestrator | changed: [testbed-node-5] 2025-09-08 01:06:16.485978 | orchestrator | 2025-09-08 01:06:16.485986 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:06:16.485994 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-08 01:06:16.486003 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-08 01:06:16.486011 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-08 01:06:16.486043 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-08 01:06:16.486051 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-08 01:06:16.486060 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-08 01:06:16.486067 | orchestrator | 2025-09-08 
01:06:16.486075 | orchestrator | 2025-09-08 01:06:16.486089 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:06:16.486097 | orchestrator | Monday 08 September 2025 01:06:14 +0000 (0:00:26.885) 0:04:08.313 ****** 2025-09-08 01:06:16.486104 | orchestrator | =============================================================================== 2025-09-08 01:06:16.486112 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 44.76s 2025-09-08 01:06:16.486120 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.09s 2025-09-08 01:06:16.486128 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 26.89s 2025-09-08 01:06:16.486136 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.49s 2025-09-08 01:06:16.486144 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.48s 2025-09-08 01:06:16.486151 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.95s 2025-09-08 01:06:16.486159 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 4.66s 2025-09-08 01:06:16.486167 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 4.54s 2025-09-08 01:06:16.486175 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 4.46s 2025-09-08 01:06:16.486182 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 3.99s 2025-09-08 01:06:16.486190 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.96s 2025-09-08 01:06:16.486198 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.69s 2025-09-08 01:06:16.486206 | orchestrator | neutron : Copying over ml2_conf.ini 
------------------------------------- 3.69s 2025-09-08 01:06:16.486213 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 3.62s 2025-09-08 01:06:16.486221 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.50s 2025-09-08 01:06:16.486229 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.40s 2025-09-08 01:06:16.486237 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.36s 2025-09-08 01:06:16.486244 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.35s 2025-09-08 01:06:16.486252 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.35s 2025-09-08 01:06:16.486260 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.29s 2025-09-08 01:06:16.486268 | orchestrator | 2025-09-08 01:06:16 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:19.519633 | orchestrator | 2025-09-08 01:06:19 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:06:19.519906 | orchestrator | 2025-09-08 01:06:19 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:06:19.520750 | orchestrator | 2025-09-08 01:06:19 | INFO  | Task 8b6a579d-2d33-403b-bdd2-f1c0b1cc9acc is in state STARTED 2025-09-08 01:06:19.522491 | orchestrator | 2025-09-08 01:06:19 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:06:19.522541 | orchestrator | 2025-09-08 01:06:19 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:22.564934 | orchestrator | 2025-09-08 01:06:22 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:06:22.565337 | orchestrator | 2025-09-08 01:06:22 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:06:22.566281 | orchestrator | 
2025-09-08 01:06:22 | INFO  | Task 8b6a579d-2d33-403b-bdd2-f1c0b1cc9acc is in state STARTED 2025-09-08 01:06:22.568613 | orchestrator | 2025-09-08 01:06:22 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:06:22.568641 | orchestrator | 2025-09-08 01:06:22 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:25.606702 | orchestrator | 2025-09-08 01:06:25 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:06:25.608179 | orchestrator | 2025-09-08 01:06:25 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:06:25.610198 | orchestrator | 2025-09-08 01:06:25 | INFO  | Task 8b6a579d-2d33-403b-bdd2-f1c0b1cc9acc is in state STARTED 2025-09-08 01:06:25.612108 | orchestrator | 2025-09-08 01:06:25 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:06:25.612430 | orchestrator | 2025-09-08 01:06:25 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:28.653651 | orchestrator | 2025-09-08 01:06:28 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:06:28.654943 | orchestrator | 2025-09-08 01:06:28 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:06:28.656676 | orchestrator | 2025-09-08 01:06:28 | INFO  | Task 8b6a579d-2d33-403b-bdd2-f1c0b1cc9acc is in state STARTED 2025-09-08 01:06:28.657889 | orchestrator | 2025-09-08 01:06:28 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:06:28.657913 | orchestrator | 2025-09-08 01:06:28 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:31.698274 | orchestrator | 2025-09-08 01:06:31 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:06:31.699463 | orchestrator | 2025-09-08 01:06:31 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:06:31.700773 | orchestrator | 2025-09-08 01:06:31 | INFO  | 
Task 8b6a579d-2d33-403b-bdd2-f1c0b1cc9acc is in state STARTED 2025-09-08 01:06:31.701988 | orchestrator | 2025-09-08 01:06:31 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:06:31.702011 | orchestrator | 2025-09-08 01:06:31 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:34.764691 | orchestrator | 2025-09-08 01:06:34 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:06:34.768731 | orchestrator | 2025-09-08 01:06:34 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:06:34.768753 | orchestrator | 2025-09-08 01:06:34 | INFO  | Task 8b6a579d-2d33-403b-bdd2-f1c0b1cc9acc is in state STARTED 2025-09-08 01:06:34.776860 | orchestrator | 2025-09-08 01:06:34 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:06:34.777473 | orchestrator | 2025-09-08 01:06:34 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:37.820070 | orchestrator | 2025-09-08 01:06:37 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:06:37.820370 | orchestrator | 2025-09-08 01:06:37 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:06:37.822516 | orchestrator | 2025-09-08 01:06:37 | INFO  | Task 8b6a579d-2d33-403b-bdd2-f1c0b1cc9acc is in state STARTED 2025-09-08 01:06:37.825735 | orchestrator | 2025-09-08 01:06:37 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:06:37.825758 | orchestrator | 2025-09-08 01:06:37 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:40.876394 | orchestrator | 2025-09-08 01:06:40 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:06:40.877808 | orchestrator | 2025-09-08 01:06:40 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:06:40.882332 | orchestrator | 2025-09-08 01:06:40 | INFO  | Task 
8b6a579d-2d33-403b-bdd2-f1c0b1cc9acc is in state SUCCESS 2025-09-08 01:06:40.885210 | orchestrator | 2025-09-08 01:06:40.885251 | orchestrator | 2025-09-08 01:06:40.885292 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 01:06:40.885304 | orchestrator | 2025-09-08 01:06:40.885327 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 01:06:40.885339 | orchestrator | Monday 08 September 2025 01:03:06 +0000 (0:00:00.294) 0:00:00.294 ****** 2025-09-08 01:06:40.885350 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:06:40.885362 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:06:40.885372 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:06:40.885383 | orchestrator | 2025-09-08 01:06:40.885394 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 01:06:40.885405 | orchestrator | Monday 08 September 2025 01:03:07 +0000 (0:00:00.282) 0:00:00.576 ****** 2025-09-08 01:06:40.885416 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-09-08 01:06:40.885427 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-09-08 01:06:40.885438 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-09-08 01:06:40.885449 | orchestrator | 2025-09-08 01:06:40.885460 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-09-08 01:06:40.885470 | orchestrator | 2025-09-08 01:06:40.885481 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-08 01:06:40.885492 | orchestrator | Monday 08 September 2025 01:03:08 +0000 (0:00:00.844) 0:00:01.421 ****** 2025-09-08 01:06:40.885799 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:06:40.885815 | orchestrator | 2025-09-08 
01:06:40.885826 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-09-08 01:06:40.885837 | orchestrator | Monday 08 September 2025 01:03:08 +0000 (0:00:00.888) 0:00:02.309 ****** 2025-09-08 01:06:40.885848 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-09-08 01:06:40.885859 | orchestrator | 2025-09-08 01:06:40.885870 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-09-08 01:06:40.885881 | orchestrator | Monday 08 September 2025 01:03:12 +0000 (0:00:03.727) 0:00:06.037 ****** 2025-09-08 01:06:40.885892 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-09-08 01:06:40.885904 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-09-08 01:06:40.885915 | orchestrator | 2025-09-08 01:06:40.885925 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-09-08 01:06:40.885937 | orchestrator | Monday 08 September 2025 01:03:19 +0000 (0:00:06.354) 0:00:12.391 ****** 2025-09-08 01:06:40.885948 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-08 01:06:40.885959 | orchestrator | 2025-09-08 01:06:40.885970 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-09-08 01:06:40.885981 | orchestrator | Monday 08 September 2025 01:03:22 +0000 (0:00:03.134) 0:00:15.526 ****** 2025-09-08 01:06:40.885992 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-08 01:06:40.886003 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-09-08 01:06:40.886060 | orchestrator | 2025-09-08 01:06:40.886077 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-09-08 01:06:40.886088 | orchestrator | Monday 08 September 2025 01:03:25 
+0000 (0:00:03.788) 0:00:19.314 ****** 2025-09-08 01:06:40.886099 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-08 01:06:40.886109 | orchestrator | 2025-09-08 01:06:40.886120 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-08 01:06:40.886215 | orchestrator | Monday 08 September 2025 01:03:29 +0000 (0:00:03.510) 0:00:22.824 ****** 2025-09-08 01:06:40.886230 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-08 01:06:40.886241 | orchestrator | 2025-09-08 01:06:40.886306 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-09-08 01:06:40.886320 | orchestrator | Monday 08 September 2025 01:03:33 +0000 (0:00:04.054) 0:00:26.878 ****** 2025-09-08 01:06:40.886347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:06:40.886861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:06:40.886885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:06:40.886898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:06:40.886910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:06:40.886936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.886949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:06:40.886993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.887006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.887019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.887030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.887041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.887061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.887072 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.887368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.887387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.887399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.887410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.887422 | orchestrator | 2025-09-08 01:06:40.887433 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-08 01:06:40.887453 | orchestrator | Monday 08 September 2025 01:03:37 +0000 (0:00:03.616) 0:00:30.495 ****** 2025-09-08 01:06:40.887465 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:40.887476 | orchestrator | 2025-09-08 01:06:40.887487 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-09-08 01:06:40.887497 | orchestrator | Monday 08 September 2025 01:03:37 +0000 (0:00:00.178) 0:00:30.674 ****** 2025-09-08 01:06:40.887508 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:40.887519 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:40.887530 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:40.887541 | 
orchestrator | 2025-09-08 01:06:40.887552 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-08 01:06:40.887562 | orchestrator | Monday 08 September 2025 01:03:37 +0000 (0:00:00.322) 0:00:30.996 ****** 2025-09-08 01:06:40.887573 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:06:40.887584 | orchestrator | 2025-09-08 01:06:40.887595 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-08 01:06:40.887606 | orchestrator | Monday 08 September 2025 01:03:38 +0000 (0:00:00.592) 0:00:31.589 ****** 2025-09-08 01:06:40.887617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:06:40.887787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:06:40.887803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:06:40.887815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:06:40.887836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:06:40.887847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:06:40.887859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.887903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.887917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.887929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.887947 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.887959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.887970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.888008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.888037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.888049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.888067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.888079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.888090 | orchestrator | 2025-09-08 01:06:40.888101 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-08 01:06:40.888112 | orchestrator | Monday 08 September 2025 01:03:44 +0000 (0:00:06.677) 0:00:38.267 ****** 2025-09-08 01:06:40.888123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-08 01:06:40.888135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-08 01:06:40.888180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888230 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:40.888240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-08 01:06:40.888250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-08 01:06:40.888289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-08 01:06:40.888327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-08 01:06:40.888347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888401 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:40.888412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888454 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:40.888465 | orchestrator | 2025-09-08 01:06:40.888477 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-09-08 01:06:40.888488 | orchestrator | 
Monday 08 September 2025 01:03:45 +0000 (0:00:00.841) 0:00:39.108 ****** 2025-09-08 01:06:40.888500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-08 01:06:40.888511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-08 01:06:40.888552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888606 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:40.888618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-08 01:06:40.888664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-08 01:06:40.888704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888761 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:40.888771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-08 01:06:40.888781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-08 01:06:40.888792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888861 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.888871 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:40.888880 | orchestrator | 2025-09-08 01:06:40.888890 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-09-08 01:06:40.888900 | orchestrator | Monday 08 September 2025 01:03:48 +0000 (0:00:02.287) 0:00:41.396 ****** 2025-09-08 01:06:40.888909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:06:40.888920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:06:40.888960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:06:40.888983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:06:40.888993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.889187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.889198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.889207 | orchestrator |
2025-09-08 01:06:40.889217 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-09-08 01:06:40.889227 | orchestrator | Monday 08 September 2025 01:03:54 +0000 (0:00:06.459) 0:00:47.856 ******
2025-09-08 01:06:40.889237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:06:40.889247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:06:40.889265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:06:40.889304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889493 | orchestrator | 2025-09-08 01:06:40.889503 | orchestrator | TASK [designate : 
Copying over pools.yaml] *************************************
2025-09-08 01:06:40.889512 | orchestrator | Monday 08 September 2025 01:04:18 +0000 (0:00:24.222) 0:01:12.078 ******
2025-09-08 01:06:40.889522 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-08 01:06:40.889532 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-08 01:06:40.889541 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-08 01:06:40.889551 | orchestrator |
2025-09-08 01:06:40.889560 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-09-08 01:06:40.889570 | orchestrator | Monday 08 September 2025 01:04:25 +0000 (0:00:06.953) 0:01:19.031 ******
2025-09-08 01:06:40.889579 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-08 01:06:40.889588 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-08 01:06:40.889598 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-08 01:06:40.889608 | orchestrator |
2025-09-08 01:06:40.889617 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2025-09-08 01:06:40.889674 | orchestrator | Monday 08 September 2025 01:04:29 +0000 (0:00:03.705) 0:01:22.737 ******
2025-09-08 01:06:40.889685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-08 01:06:40.889703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-08 01:06:40.889721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-08 01:06:40.889758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.889779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.889796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.889806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.889848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.889858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.889868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.889887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.889896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-08 01:06:40.889904 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:06:40.889937 | orchestrator | 2025-09-08 01:06:40.889945 | orchestrator | TASK [designate : Copying over rndc.key] 
***************************************
2025-09-08 01:06:40.889953 | orchestrator | Monday 08 September 2025  01:04:32 +0000 (0:00:02.909) 0:01:25.647 ******
2025-09-08 01:06:40.889961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 01:06:40.889975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 01:06:40.889983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 01:06:40.889999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 01:06:40.890008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 01:06:40.890087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 01:06:40.890131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890198 | orchestrator |
2025-09-08 01:06:40.890206 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-08 01:06:40.890214 | orchestrator | Monday 08 September 2025  01:04:34 +0000 (0:00:02.413) 0:01:28.060 ******
2025-09-08 01:06:40.890222 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:40.890230 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:40.890245 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:40.890253 | orchestrator |
2025-09-08 01:06:40.890261 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2025-09-08 01:06:40.890269 | orchestrator | Monday 08 September 2025  01:04:35 +0000 (0:00:00.313) 0:01:28.374 ******
2025-09-08 01:06:40.890277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 01:06:40.890286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 01:06:40.890294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890343 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:40.890352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 01:06:40.890360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 01:06:40.890368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890416 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:40.890424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 01:06:40.890432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 01:06:40.890441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890489 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:40.890498 | orchestrator |
2025-09-08 01:06:40.890506 | orchestrator | TASK [designate : Check designate containers] **********************************
2025-09-08 01:06:40.890514 | orchestrator | Monday 08 September 2025  01:04:36 +0000 (0:00:01.742) 0:01:30.116 ******
2025-09-08 01:06:40.890522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 01:06:40.890531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 01:06:40.890539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 01:06:40.890547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 01:06:40.890566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 01:06:40.890580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 01:06:40.890589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:06:40.890745 | orchestrator |
2025-09-08 01:06:40.890758 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-08 01:06:40.890766 | orchestrator | Monday 08 September 2025  01:04:41 +0000 (0:00:04.511) 0:01:34.628 ******
2025-09-08 01:06:40.890774 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:40.890782 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:40.890790 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:40.890798 | orchestrator |
2025-09-08 01:06:40.890806 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-09-08 01:06:40.890814 | orchestrator | Monday 08 September 2025  01:04:41 +0000 (0:00:00.311) 0:01:34.940 ******
2025-09-08 01:06:40.890822 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-09-08 01:06:40.890830 | orchestrator |
2025-09-08 01:06:40.890838 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-09-08 01:06:40.890846 | orchestrator | Monday 08 September 2025  01:04:43 +0000 (0:00:02.100) 0:01:37.040 ******
2025-09-08 01:06:40.890854 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-08 01:06:40.890862 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-09-08 01:06:40.890869 | orchestrator |
2025-09-08 01:06:40.890877 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-09-08 01:06:40.890885 | orchestrator | Monday 08 September 2025  01:04:45 +0000 (0:00:02.288) 0:01:39.329 ******
2025-09-08 01:06:40.890893 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:06:40.890901 | orchestrator |
2025-09-08 01:06:40.890909 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-08 01:06:40.890916 | orchestrator | Monday 08 September 2025  01:05:03 +0000 (0:00:17.662) 0:01:56.991 ******
2025-09-08 01:06:40.890924 | orchestrator |
2025-09-08 01:06:40.890932 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-08 01:06:40.890940 | orchestrator | Monday 08 September 2025  01:05:03 +0000 (0:00:00.328) 0:01:57.320 ******
2025-09-08 01:06:40.890948 | orchestrator |
2025-09-08 01:06:40.890956 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-08 01:06:40.890964 | orchestrator | Monday 08 September 2025  01:05:04 +0000 (0:00:00.066) 0:01:57.386 ******
2025-09-08 01:06:40.890972 | orchestrator |
2025-09-08 01:06:40.890980 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-09-08 01:06:40.890987 | orchestrator | Monday 08 September 2025  01:05:04 +0000 (0:00:00.071) 0:01:57.458 ******
2025-09-08 01:06:40.890995 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:06:40.891003 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:06:40.891011 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:06:40.891019 | orchestrator |
2025-09-08 01:06:40.891027 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-09-08 01:06:40.891035 | orchestrator | Monday 08 September 2025  01:05:13 +0000 (0:00:09.251) 0:02:06.709 ******
2025-09-08 01:06:40.891042 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:06:40.891050 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:06:40.891058 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:06:40.891066 | orchestrator |
2025-09-08 01:06:40.891074 |
orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-09-08 01:06:40.891082 | orchestrator | Monday 08 September 2025 01:05:19 +0000 (0:00:06.167) 0:02:12.876 ****** 2025-09-08 01:06:40.891090 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:06:40.891098 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:06:40.891110 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:06:40.891118 | orchestrator | 2025-09-08 01:06:40.891126 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-09-08 01:06:40.891134 | orchestrator | Monday 08 September 2025 01:05:32 +0000 (0:00:12.687) 0:02:25.564 ****** 2025-09-08 01:06:40.891142 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:06:40.891150 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:06:40.891158 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:06:40.891165 | orchestrator | 2025-09-08 01:06:40.891173 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-09-08 01:06:40.891181 | orchestrator | Monday 08 September 2025 01:06:17 +0000 (0:00:45.448) 0:03:11.013 ****** 2025-09-08 01:06:40.891189 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:06:40.891197 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:06:40.891205 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:06:40.891212 | orchestrator | 2025-09-08 01:06:40.891220 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-09-08 01:06:40.891228 | orchestrator | Monday 08 September 2025 01:06:23 +0000 (0:00:05.584) 0:03:16.597 ****** 2025-09-08 01:06:40.891236 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:06:40.891244 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:06:40.891252 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:06:40.891260 | orchestrator | 2025-09-08 01:06:40.891268 | orchestrator | 
TASK [designate : Non-destructive DNS pools update] **************************** 2025-09-08 01:06:40.891275 | orchestrator | Monday 08 September 2025 01:06:29 +0000 (0:00:06.644) 0:03:23.242 ****** 2025-09-08 01:06:40.891283 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:06:40.891291 | orchestrator | 2025-09-08 01:06:40.891299 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:06:40.891307 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-08 01:06:40.891316 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-08 01:06:40.891324 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-08 01:06:40.891332 | orchestrator | 2025-09-08 01:06:40.891340 | orchestrator | 2025-09-08 01:06:40.891352 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:06:40.891361 | orchestrator | Monday 08 September 2025 01:06:37 +0000 (0:00:07.541) 0:03:30.783 ****** 2025-09-08 01:06:40.891372 | orchestrator | =============================================================================== 2025-09-08 01:06:40.891381 | orchestrator | designate : Restart designate-producer container ----------------------- 45.45s 2025-09-08 01:06:40.891389 | orchestrator | designate : Copying over designate.conf -------------------------------- 24.22s 2025-09-08 01:06:40.891396 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.66s 2025-09-08 01:06:40.891404 | orchestrator | designate : Restart designate-central container ------------------------ 12.69s 2025-09-08 01:06:40.891412 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 9.25s 2025-09-08 01:06:40.891420 | orchestrator | designate : Non-destructive DNS pools 
update ---------------------------- 7.54s 2025-09-08 01:06:40.891428 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.95s 2025-09-08 01:06:40.891435 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.68s 2025-09-08 01:06:40.891443 | orchestrator | designate : Restart designate-worker container -------------------------- 6.64s 2025-09-08 01:06:40.891451 | orchestrator | designate : Copying over config.json files for services ----------------- 6.46s 2025-09-08 01:06:40.891459 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.35s 2025-09-08 01:06:40.891467 | orchestrator | designate : Restart designate-api container ----------------------------- 6.17s 2025-09-08 01:06:40.891480 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.58s 2025-09-08 01:06:40.891488 | orchestrator | designate : Check designate containers ---------------------------------- 4.51s 2025-09-08 01:06:40.891496 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.05s 2025-09-08 01:06:40.891503 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.79s 2025-09-08 01:06:40.891511 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.73s 2025-09-08 01:06:40.891519 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.71s 2025-09-08 01:06:40.891527 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.62s 2025-09-08 01:06:40.891535 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.51s 2025-09-08 01:06:40.891543 | orchestrator | 2025-09-08 01:06:40 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:06:40.891551 | orchestrator | 2025-09-08 01:06:40 | INFO  | Task 
63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:06:40.891559 | orchestrator | 2025-09-08 01:06:40 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:43.934921 | orchestrator | 2025-09-08 01:06:43 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:06:43.937432 | orchestrator | 2025-09-08 01:06:43 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:06:43.939082 | orchestrator | 2025-09-08 01:06:43 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:06:43.940589 | orchestrator | 2025-09-08 01:06:43 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:06:43.940612 | orchestrator | 2025-09-08 01:06:43 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:46.986523 | orchestrator | 2025-09-08 01:06:46 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:06:46.986956 | orchestrator | 2025-09-08 01:06:46 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:06:46.987842 | orchestrator | 2025-09-08 01:06:46 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:06:46.989177 | orchestrator | 2025-09-08 01:06:46 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:06:46.989199 | orchestrator | 2025-09-08 01:06:46 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:50.037711 | orchestrator | 2025-09-08 01:06:50 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:06:50.040620 | orchestrator | 2025-09-08 01:06:50 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:06:50.040680 | orchestrator | 2025-09-08 01:06:50 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:06:50.041900 | orchestrator | 2025-09-08 01:06:50 | INFO  | Task 
63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:06:50.041930 | orchestrator | 2025-09-08 01:06:50 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:53.080415 | orchestrator | 2025-09-08 01:06:53 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:06:53.080805 | orchestrator | 2025-09-08 01:06:53 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:06:53.082230 | orchestrator | 2025-09-08 01:06:53 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:06:53.084581 | orchestrator | 2025-09-08 01:06:53 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:06:53.084757 | orchestrator | 2025-09-08 01:06:53 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:56.119194 | orchestrator | 2025-09-08 01:06:56 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:06:56.119782 | orchestrator | 2025-09-08 01:06:56 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:06:56.120929 | orchestrator | 2025-09-08 01:06:56 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:06:56.122093 | orchestrator | 2025-09-08 01:06:56 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:06:56.122122 | orchestrator | 2025-09-08 01:06:56 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:59.158786 | orchestrator | 2025-09-08 01:06:59 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:06:59.159166 | orchestrator | 2025-09-08 01:06:59 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:06:59.161074 | orchestrator | 2025-09-08 01:06:59 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:06:59.161290 | orchestrator | 2025-09-08 01:06:59 | INFO  | Task 
63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:06:59.161313 | orchestrator | 2025-09-08 01:06:59 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:02.205250 | orchestrator | 2025-09-08 01:07:02 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:07:02.210283 | orchestrator | 2025-09-08 01:07:02 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:07:02.213335 | orchestrator | 2025-09-08 01:07:02 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:07:02.215497 | orchestrator | 2025-09-08 01:07:02 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:07:02.215893 | orchestrator | 2025-09-08 01:07:02 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:05.253226 | orchestrator | 2025-09-08 01:07:05 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:07:05.254239 | orchestrator | 2025-09-08 01:07:05 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:07:05.256365 | orchestrator | 2025-09-08 01:07:05 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:07:05.258411 | orchestrator | 2025-09-08 01:07:05 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:07:05.258436 | orchestrator | 2025-09-08 01:07:05 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:08.309434 | orchestrator | 2025-09-08 01:07:08 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:07:08.311595 | orchestrator | 2025-09-08 01:07:08 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:07:08.313934 | orchestrator | 2025-09-08 01:07:08 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:07:08.316115 | orchestrator | 2025-09-08 01:07:08 | INFO  | Task 
63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:07:08.316134 | orchestrator | 2025-09-08 01:07:08 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:11.356370 | orchestrator | 2025-09-08 01:07:11 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:07:11.357540 | orchestrator | 2025-09-08 01:07:11 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:07:11.359083 | orchestrator | 2025-09-08 01:07:11 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:07:11.360963 | orchestrator | 2025-09-08 01:07:11 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:07:11.360986 | orchestrator | 2025-09-08 01:07:11 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:14.404643 | orchestrator | 2025-09-08 01:07:14 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:07:14.406698 | orchestrator | 2025-09-08 01:07:14 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:07:14.407186 | orchestrator | 2025-09-08 01:07:14 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:07:14.408163 | orchestrator | 2025-09-08 01:07:14 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:07:14.408186 | orchestrator | 2025-09-08 01:07:14 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:17.456421 | orchestrator | 2025-09-08 01:07:17 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:07:17.458527 | orchestrator | 2025-09-08 01:07:17 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:07:17.461625 | orchestrator | 2025-09-08 01:07:17 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:07:17.465359 | orchestrator | 2025-09-08 01:07:17 | INFO  | Task 
63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:07:17.465630 | orchestrator | 2025-09-08 01:07:17 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:20.518394 | orchestrator | 2025-09-08 01:07:20 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:07:20.521387 | orchestrator | 2025-09-08 01:07:20 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:07:20.523840 | orchestrator | 2025-09-08 01:07:20 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:07:20.525672 | orchestrator | 2025-09-08 01:07:20 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:07:20.525744 | orchestrator | 2025-09-08 01:07:20 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:23.566209 | orchestrator | 2025-09-08 01:07:23 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:07:23.570606 | orchestrator | 2025-09-08 01:07:23 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:07:23.573134 | orchestrator | 2025-09-08 01:07:23 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:07:23.575033 | orchestrator | 2025-09-08 01:07:23 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:07:23.575309 | orchestrator | 2025-09-08 01:07:23 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:26.624851 | orchestrator | 2025-09-08 01:07:26 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:07:26.625511 | orchestrator | 2025-09-08 01:07:26 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:07:26.626279 | orchestrator | 2025-09-08 01:07:26 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:07:26.627287 | orchestrator | 2025-09-08 01:07:26 | INFO  | Task 
63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:07:26.627308 | orchestrator | 2025-09-08 01:07:26 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:29.668293 | orchestrator | 2025-09-08 01:07:29 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state STARTED 2025-09-08 01:07:29.668439 | orchestrator | 2025-09-08 01:07:29 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:07:29.668879 | orchestrator | 2025-09-08 01:07:29 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:07:29.670496 | orchestrator | 2025-09-08 01:07:29 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:07:29.670611 | orchestrator | 2025-09-08 01:07:29 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:32.717398 | orchestrator | 2025-09-08 01:07:32.717500 | orchestrator | 2025-09-08 01:07:32.717517 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 01:07:32.717529 | orchestrator | 2025-09-08 01:07:32.717540 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 01:07:32.717552 | orchestrator | Monday 08 September 2025 01:06:19 +0000 (0:00:00.303) 0:00:00.303 ****** 2025-09-08 01:07:32.717563 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:07:32.717575 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:07:32.717586 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:07:32.717597 | orchestrator | 2025-09-08 01:07:32.717608 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 01:07:32.717619 | orchestrator | Monday 08 September 2025 01:06:19 +0000 (0:00:00.290) 0:00:00.593 ****** 2025-09-08 01:07:32.717631 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-09-08 01:07:32.717642 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 
2025-09-08 01:07:32.717653 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-09-08 01:07:32.717664 | orchestrator | 2025-09-08 01:07:32.717674 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-09-08 01:07:32.717685 | orchestrator | 2025-09-08 01:07:32.717696 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-08 01:07:32.717747 | orchestrator | Monday 08 September 2025 01:06:19 +0000 (0:00:00.506) 0:00:01.099 ****** 2025-09-08 01:07:32.717760 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:07:32.717772 | orchestrator | 2025-09-08 01:07:32.717798 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-09-08 01:07:32.717810 | orchestrator | Monday 08 September 2025 01:06:20 +0000 (0:00:00.598) 0:00:01.698 ****** 2025-09-08 01:07:32.717820 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-09-08 01:07:32.717831 | orchestrator | 2025-09-08 01:07:32.717842 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-09-08 01:07:32.717852 | orchestrator | Monday 08 September 2025 01:06:24 +0000 (0:00:03.776) 0:00:05.474 ****** 2025-09-08 01:07:32.717863 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-09-08 01:07:32.717875 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-09-08 01:07:32.717885 | orchestrator | 2025-09-08 01:07:32.717896 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-09-08 01:07:32.717909 | orchestrator | Monday 08 September 2025 01:06:31 +0000 (0:00:07.338) 0:00:12.813 ****** 2025-09-08 01:07:32.718000 | orchestrator | ok: 
[testbed-node-0] => (item=service) 2025-09-08 01:07:32.718014 | orchestrator | 2025-09-08 01:07:32.718101 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-09-08 01:07:32.718113 | orchestrator | Monday 08 September 2025 01:06:35 +0000 (0:00:03.760) 0:00:16.574 ****** 2025-09-08 01:07:32.718123 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-08 01:07:32.718134 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-09-08 01:07:32.718145 | orchestrator | 2025-09-08 01:07:32.718156 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-09-08 01:07:32.718166 | orchestrator | Monday 08 September 2025 01:06:39 +0000 (0:00:04.300) 0:00:20.874 ****** 2025-09-08 01:07:32.718203 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-08 01:07:32.718215 | orchestrator | 2025-09-08 01:07:32.718226 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-09-08 01:07:32.718236 | orchestrator | Monday 08 September 2025 01:06:42 +0000 (0:00:03.146) 0:00:24.020 ****** 2025-09-08 01:07:32.718247 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-09-08 01:07:32.718258 | orchestrator | 2025-09-08 01:07:32.718268 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-08 01:07:32.718279 | orchestrator | Monday 08 September 2025 01:06:47 +0000 (0:00:04.413) 0:00:28.434 ****** 2025-09-08 01:07:32.718290 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:07:32.718300 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:07:32.718311 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:07:32.718321 | orchestrator | 2025-09-08 01:07:32.718332 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-09-08 01:07:32.718343 | orchestrator | 
Monday 08 September 2025 01:06:47 +0000 (0:00:00.297) 0:00:28.732 ****** 2025-09-08 01:07:32.718358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:32.718398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:32.718419 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:32.718430 | orchestrator | 2025-09-08 01:07:32.718442 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-09-08 01:07:32.718462 | orchestrator | Monday 08 September 2025 01:06:48 +0000 (0:00:00.924) 0:00:29.657 ****** 2025-09-08 01:07:32.718474 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:07:32.718484 | orchestrator | 2025-09-08 01:07:32.718496 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-09-08 01:07:32.718507 | orchestrator | Monday 08 September 2025 01:06:48 +0000 (0:00:00.139) 0:00:29.796 ****** 2025-09-08 01:07:32.718517 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:07:32.718528 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:07:32.718539 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:07:32.718549 | orchestrator | 2025-09-08 01:07:32.718560 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-08 01:07:32.718571 | orchestrator | Monday 08 September 2025 01:06:49 +0000 
(0:00:00.541) 0:00:30.337 ****** 2025-09-08 01:07:32.718581 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:07:32.718592 | orchestrator | 2025-09-08 01:07:32.718603 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-09-08 01:07:32.718614 | orchestrator | Monday 08 September 2025 01:06:49 +0000 (0:00:00.518) 0:00:30.856 ****** 2025-09-08 01:07:32.718625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:32.718646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': 
'30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:32.718664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:32.718675 | orchestrator | 2025-09-08 01:07:32.718692 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-08 01:07:32.718732 | orchestrator | Monday 08 September 2025 01:06:51 +0000 (0:00:01.388) 0:00:32.244 ****** 2025-09-08 01:07:32.718746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 01:07:32.718757 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:07:32.718769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 01:07:32.718780 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:07:32.718798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 01:07:32.718810 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:07:32.718821 | orchestrator | 2025-09-08 01:07:32.718832 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-09-08 01:07:32.718842 | orchestrator | Monday 08 September 2025 01:06:51 +0000 (0:00:00.813) 0:00:33.058 ****** 2025-09-08 01:07:32.718858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 01:07:32.718878 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:07:32.718889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 01:07:32.718901 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:07:32.718912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 01:07:32.718924 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:07:32.718934 | orchestrator | 2025-09-08 01:07:32.718945 | 
orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-08 01:07:32.718956 | orchestrator | Monday 08 September 2025 01:06:52 +0000 (0:00:00.677) 0:00:33.735 ****** 2025-09-08 01:07:32.718973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:32.718985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:32.719007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:32.719019 | orchestrator | 2025-09-08 01:07:32.719030 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-09-08 01:07:32.719041 | orchestrator | Monday 08 September 2025 01:06:53 +0000 (0:00:01.374) 0:00:35.109 ****** 2025-09-08 01:07:32.719052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:32.719064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:32.719083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:32.719101 | orchestrator | 2025-09-08 01:07:32.719112 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-09-08 01:07:32.719123 | orchestrator | Monday 08 September 2025 01:06:56 +0000 (0:00:02.393) 0:00:37.503 ****** 2025-09-08 01:07:32.719134 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-08 01:07:32.719149 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-08 01:07:32.719167 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-08 01:07:32.719185 | orchestrator | 2025-09-08 01:07:32.719209 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-09-08 01:07:32.719224 | orchestrator | Monday 08 September 2025 01:06:58 +0000 (0:00:01.917) 0:00:39.420 ****** 2025-09-08 01:07:32.719235 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:07:32.719245 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:07:32.719256 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:07:32.719267 | orchestrator | 2025-09-08 01:07:32.719278 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-09-08 01:07:32.719289 | orchestrator | Monday 08 September 2025 01:06:59 +0000 (0:00:01.468) 0:00:40.889 ****** 2025-09-08 01:07:32.719300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 01:07:32.719311 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:07:32.719322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 01:07:32.719333 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:07:32.719353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 01:07:32.719372 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:07:32.719383 | orchestrator | 2025-09-08 01:07:32.719394 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-08 01:07:32.719405 | orchestrator | Monday 08 September 2025 01:07:00 +0000 (0:00:00.600) 0:00:41.489 ****** 2025-09-08 01:07:32.719421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:32.719433 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:32.719444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:32.719456 | orchestrator | 2025-09-08 01:07:32.719466 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-08 01:07:32.719477 | orchestrator | Monday 08 
September 2025 01:07:01 +0000 (0:00:01.561) 0:00:43.051 ****** 2025-09-08 01:07:32.719489 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:07:32.719500 | orchestrator | 2025-09-08 01:07:32.719510 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-09-08 01:07:32.719521 | orchestrator | Monday 08 September 2025 01:07:04 +0000 (0:00:02.665) 0:00:45.716 ****** 2025-09-08 01:07:32.719546 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:07:32.719557 | orchestrator | 2025-09-08 01:07:32.719568 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-09-08 01:07:32.719579 | orchestrator | Monday 08 September 2025 01:07:07 +0000 (0:00:02.568) 0:00:48.284 ****** 2025-09-08 01:07:32.719589 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:07:32.719600 | orchestrator | 2025-09-08 01:07:32.719611 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-08 01:07:32.719621 | orchestrator | Monday 08 September 2025 01:07:20 +0000 (0:00:13.382) 0:01:01.667 ****** 2025-09-08 01:07:32.719632 | orchestrator | 2025-09-08 01:07:32.719643 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-08 01:07:32.719653 | orchestrator | Monday 08 September 2025 01:07:20 +0000 (0:00:00.080) 0:01:01.748 ****** 2025-09-08 01:07:32.719664 | orchestrator | 2025-09-08 01:07:32.719682 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-08 01:07:32.719693 | orchestrator | Monday 08 September 2025 01:07:20 +0000 (0:00:00.065) 0:01:01.813 ****** 2025-09-08 01:07:32.719740 | orchestrator | 2025-09-08 01:07:32.719763 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-09-08 01:07:32.719781 | orchestrator | Monday 08 September 2025 01:07:20 +0000 (0:00:00.065) 0:01:01.879 ****** 2025-09-08 
01:07:32.719793 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:07:32.719804 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:07:32.719815 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:07:32.719826 | orchestrator | 2025-09-08 01:07:32.719836 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:07:32.719848 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-08 01:07:32.719860 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-08 01:07:32.719871 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-08 01:07:32.719882 | orchestrator | 2025-09-08 01:07:32.719893 | orchestrator | 2025-09-08 01:07:32.719904 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:07:32.719921 | orchestrator | Monday 08 September 2025 01:07:31 +0000 (0:00:10.550) 0:01:12.430 ****** 2025-09-08 01:07:32.719932 | orchestrator | =============================================================================== 2025-09-08 01:07:32.719942 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.38s 2025-09-08 01:07:32.719953 | orchestrator | placement : Restart placement-api container ---------------------------- 10.55s 2025-09-08 01:07:32.719963 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.34s 2025-09-08 01:07:32.719974 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.41s 2025-09-08 01:07:32.719985 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.30s 2025-09-08 01:07:32.719995 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.78s 2025-09-08 01:07:32.720006 | 
orchestrator | service-ks-register : placement | Creating projects --------------------- 3.76s 2025-09-08 01:07:32.720017 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.15s 2025-09-08 01:07:32.720027 | orchestrator | placement : Creating placement databases -------------------------------- 2.67s 2025-09-08 01:07:32.720038 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.57s 2025-09-08 01:07:32.720049 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.39s 2025-09-08 01:07:32.720060 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.92s 2025-09-08 01:07:32.720071 | orchestrator | placement : Check placement containers ---------------------------------- 1.56s 2025-09-08 01:07:32.720089 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.47s 2025-09-08 01:07:32.720100 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.39s 2025-09-08 01:07:32.720111 | orchestrator | placement : Copying over config.json files for services ----------------- 1.37s 2025-09-08 01:07:32.720121 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.92s 2025-09-08 01:07:32.720132 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.81s 2025-09-08 01:07:32.720142 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.68s 2025-09-08 01:07:32.720153 | orchestrator | placement : Copying over existing policy file --------------------------- 0.60s 2025-09-08 01:07:32.720163 | orchestrator | 2025-09-08 01:07:32 | INFO  | Task d5124aab-41e7-473a-bcca-245d9e230c00 is in state SUCCESS 2025-09-08 01:07:32.720174 | orchestrator | 2025-09-08 01:07:32 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 
01:07:32.720185 | orchestrator | 2025-09-08 01:07:32 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:07:32.721239 | orchestrator | 2025-09-08 01:07:32 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:07:32.723243 | orchestrator | 2025-09-08 01:07:32 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:07:32.723276 | orchestrator | 2025-09-08 01:07:32 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:35.783976 | orchestrator | 2025-09-08 01:07:35 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:07:35.784403 | orchestrator | 2025-09-08 01:07:35 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:07:35.785255 | orchestrator | 2025-09-08 01:07:35 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:07:35.786211 | orchestrator | 2025-09-08 01:07:35 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:07:35.786308 | orchestrator | 2025-09-08 01:07:35 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:38.806399 | orchestrator | 2025-09-08 01:07:38 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:07:38.806506 | orchestrator | 2025-09-08 01:07:38 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:07:38.807789 | orchestrator | 2025-09-08 01:07:38 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:07:38.808299 | orchestrator | 2025-09-08 01:07:38 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:07:38.808705 | orchestrator | 2025-09-08 01:07:38 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:41.826784 | orchestrator | 2025-09-08 01:07:41 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:07:41.827002 | orchestrator 
| 2025-09-08 01:07:41 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:07:41.827711 | orchestrator | 2025-09-08 01:07:41 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:07:41.828442 | orchestrator | 2025-09-08 01:07:41 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:07:41.828483 | orchestrator | 2025-09-08 01:07:41 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:44.872466 | orchestrator | 2025-09-08 01:07:44 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:07:44.874155 | orchestrator | 2025-09-08 01:07:44 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:07:44.876319 | orchestrator | 2025-09-08 01:07:44 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:07:44.878278 | orchestrator | 2025-09-08 01:07:44 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:07:44.878301 | orchestrator | 2025-09-08 01:07:44 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:47.912848 | orchestrator | 2025-09-08 01:07:47 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:07:47.913508 | orchestrator | 2025-09-08 01:07:47 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:07:47.914641 | orchestrator | 2025-09-08 01:07:47 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED 2025-09-08 01:07:47.915763 | orchestrator | 2025-09-08 01:07:47 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:07:47.915790 | orchestrator | 2025-09-08 01:07:47 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:50.967882 | orchestrator | 2025-09-08 01:07:50 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:07:50.972317 | orchestrator | 2025-09-08 01:07:50 | INFO  | 
Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:07:50.975197 | orchestrator | 2025-09-08 01:07:50 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED
2025-09-08 01:07:50.977768 | orchestrator | 2025-09-08 01:07:50 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:07:50.977834 | orchestrator | 2025-09-08 01:07:50 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:07:54.020801 | orchestrator | 2025-09-08 01:07:54 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:07:54.022999 | orchestrator | 2025-09-08 01:07:54 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:07:54.024935 | orchestrator | 2025-09-08 01:07:54 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED
2025-09-08 01:07:54.027406 | orchestrator | 2025-09-08 01:07:54 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:07:54.027936 | orchestrator | 2025-09-08 01:07:54 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:07:57.069880 | orchestrator | 2025-09-08 01:07:57 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:07:57.071370 | orchestrator | 2025-09-08 01:07:57 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:07:57.074067 | orchestrator | 2025-09-08 01:07:57 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED
2025-09-08 01:07:57.076562 | orchestrator | 2025-09-08 01:07:57 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:07:57.076586 | orchestrator | 2025-09-08 01:07:57 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:00.122856 | orchestrator | 2025-09-08 01:08:00 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:08:00.123846 | orchestrator | 2025-09-08 01:08:00 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:08:00.125094 | orchestrator | 2025-09-08 01:08:00 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED
2025-09-08 01:08:00.126378 | orchestrator | 2025-09-08 01:08:00 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:08:00.126407 | orchestrator | 2025-09-08 01:08:00 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:03.177643 | orchestrator | 2025-09-08 01:08:03 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:08:03.177983 | orchestrator | 2025-09-08 01:08:03 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:08:03.179959 | orchestrator | 2025-09-08 01:08:03 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED
2025-09-08 01:08:03.179989 | orchestrator | 2025-09-08 01:08:03 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:08:03.180001 | orchestrator | 2025-09-08 01:08:03 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:06.235336 | orchestrator | 2025-09-08 01:08:06 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:08:06.235451 | orchestrator | 2025-09-08 01:08:06 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:08:06.235465 | orchestrator | 2025-09-08 01:08:06 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED
2025-09-08 01:08:06.235477 | orchestrator | 2025-09-08 01:08:06 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:08:06.235488 | orchestrator | 2025-09-08 01:08:06 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:09.250726 | orchestrator | 2025-09-08 01:08:09 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:08:09.252173 | orchestrator | 2025-09-08 01:08:09 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:08:09.252200 | orchestrator | 2025-09-08 01:08:09 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED
2025-09-08 01:08:09.252211 | orchestrator | 2025-09-08 01:08:09 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:08:09.252221 | orchestrator | 2025-09-08 01:08:09 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:12.296409 | orchestrator | 2025-09-08 01:08:12 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:08:12.297370 | orchestrator | 2025-09-08 01:08:12 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:08:12.298391 | orchestrator | 2025-09-08 01:08:12 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED
2025-09-08 01:08:12.298920 | orchestrator | 2025-09-08 01:08:12 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:08:12.298955 | orchestrator | 2025-09-08 01:08:12 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:15.345535 | orchestrator | 2025-09-08 01:08:15 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:08:15.347914 | orchestrator | 2025-09-08 01:08:15 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:08:15.350147 | orchestrator | 2025-09-08 01:08:15 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED
2025-09-08 01:08:15.351692 | orchestrator | 2025-09-08 01:08:15 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:08:15.351898 | orchestrator | 2025-09-08 01:08:15 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:18.398848 | orchestrator | 2025-09-08 01:08:18 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:08:18.400738 | orchestrator | 2025-09-08 01:08:18 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:08:18.402680 | orchestrator | 2025-09-08 01:08:18 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED
2025-09-08 01:08:18.404331 | orchestrator | 2025-09-08 01:08:18 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:08:18.404389 | orchestrator | 2025-09-08 01:08:18 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:21.454220 | orchestrator | 2025-09-08 01:08:21 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:08:21.455296 | orchestrator | 2025-09-08 01:08:21 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:08:21.456875 | orchestrator | 2025-09-08 01:08:21 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED
2025-09-08 01:08:21.457471 | orchestrator | 2025-09-08 01:08:21 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:08:21.457493 | orchestrator | 2025-09-08 01:08:21 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:24.504499 | orchestrator | 2025-09-08 01:08:24 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:08:24.506642 | orchestrator | 2025-09-08 01:08:24 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:08:24.508168 | orchestrator | 2025-09-08 01:08:24 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED
2025-09-08 01:08:24.509848 | orchestrator | 2025-09-08 01:08:24 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:08:24.509873 | orchestrator | 2025-09-08 01:08:24 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:27.545867 | orchestrator | 2025-09-08 01:08:27 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:08:27.545992 | orchestrator | 2025-09-08 01:08:27 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:08:27.546551 | orchestrator | 2025-09-08 01:08:27 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED
2025-09-08 01:08:27.549302 | orchestrator | 2025-09-08 01:08:27 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:08:27.549323 | orchestrator | 2025-09-08 01:08:27 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:30.584093 | orchestrator | 2025-09-08 01:08:30 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:08:30.584302 | orchestrator | 2025-09-08 01:08:30 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:08:30.586727 | orchestrator | 2025-09-08 01:08:30 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED
2025-09-08 01:08:30.587478 | orchestrator | 2025-09-08 01:08:30 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:08:30.587503 | orchestrator | 2025-09-08 01:08:30 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:33.611631 | orchestrator | 2025-09-08 01:08:33 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:08:33.611750 | orchestrator | 2025-09-08 01:08:33 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:08:33.612273 | orchestrator | 2025-09-08 01:08:33 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED
2025-09-08 01:08:33.612489 | orchestrator | 2025-09-08 01:08:33 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:08:33.612528 | orchestrator | 2025-09-08 01:08:33 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:36.636768 | orchestrator | 2025-09-08 01:08:36 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:08:36.637930 | orchestrator | 2025-09-08 01:08:36 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:08:36.638486 | orchestrator | 2025-09-08 01:08:36 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED
2025-09-08 01:08:36.639061 | orchestrator | 2025-09-08 01:08:36 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:08:36.639120 | orchestrator | 2025-09-08 01:08:36 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:39.662244 | orchestrator | 2025-09-08 01:08:39 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:08:39.662373 | orchestrator | 2025-09-08 01:08:39 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:08:39.663113 | orchestrator | 2025-09-08 01:08:39 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state STARTED
2025-09-08 01:08:39.663834 | orchestrator | 2025-09-08 01:08:39 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:08:39.663857 | orchestrator | 2025-09-08 01:08:39 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:42.693861 | orchestrator | 2025-09-08 01:08:42 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:08:42.694417 | orchestrator | 2025-09-08 01:08:42 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED
2025-09-08 01:08:42.695911 | orchestrator | 2025-09-08 01:08:42 | INFO  | Task 63baa224-53e8-4f23-bfe0-93f675a032b6 is in state SUCCESS
2025-09-08 01:08:42.698738 | orchestrator |
2025-09-08 01:08:42.698775 | orchestrator |
2025-09-08 01:08:42.698787 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 01:08:42.698857 | orchestrator |
2025-09-08 01:08:42.698869 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 01:08:42.698881 | orchestrator | Monday 08 September 2025 01:06:42 +0000 (0:00:00.285) 0:00:00.285
******
2025-09-08 01:08:42.698892 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:08:42.698904 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:08:42.698915 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:08:42.698926 | orchestrator |
2025-09-08 01:08:42.698937 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 01:08:42.698948 | orchestrator | Monday 08 September 2025 01:06:42 +0000 (0:00:00.291) 0:00:00.577 ******
2025-09-08 01:08:42.698959 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-09-08 01:08:42.698970 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-09-08 01:08:42.698981 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-09-08 01:08:42.698992 | orchestrator |
2025-09-08 01:08:42.699003 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-09-08 01:08:42.699013 | orchestrator |
2025-09-08 01:08:42.699024 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-08 01:08:42.699035 | orchestrator | Monday 08 September 2025 01:06:43 +0000 (0:00:00.515) 0:00:01.093 ******
2025-09-08 01:08:42.699066 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:08:42.699079 | orchestrator |
2025-09-08 01:08:42.699090 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-09-08 01:08:42.699100 | orchestrator | Monday 08 September 2025 01:06:43 +0000 (0:00:00.539) 0:00:01.633 ******
2025-09-08 01:08:42.699112 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-09-08 01:08:42.699123 | orchestrator |
2025-09-08 01:08:42.699134 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-09-08 01:08:42.699144 | orchestrator | Monday 08 September 2025 01:06:47 +0000 (0:00:03.613) 0:00:05.247 ******
2025-09-08 01:08:42.699155 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-09-08 01:08:42.699166 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-09-08 01:08:42.699207 | orchestrator |
2025-09-08 01:08:42.699218 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-09-08 01:08:42.699229 | orchestrator | Monday 08 September 2025 01:06:53 +0000 (0:00:06.133) 0:00:11.380 ******
2025-09-08 01:08:42.699240 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-08 01:08:42.699251 | orchestrator |
2025-09-08 01:08:42.699262 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-09-08 01:08:42.699274 | orchestrator | Monday 08 September 2025 01:06:56 +0000 (0:00:03.519) 0:00:14.899 ******
2025-09-08 01:08:42.699287 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-08 01:08:42.699301 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-09-08 01:08:42.699314 | orchestrator |
2025-09-08 01:08:42.699326 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-09-08 01:08:42.699339 | orchestrator | Monday 08 September 2025 01:07:00 +0000 (0:00:04.094) 0:00:18.994 ******
2025-09-08 01:08:42.699352 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-08 01:08:42.699366 | orchestrator |
2025-09-08 01:08:42.699378 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-09-08 01:08:42.699391 | orchestrator | Monday 08 September 2025 01:07:04 +0000 (0:00:04.361) 0:00:22.705 ******
2025-09-08 01:08:42.699403 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-09-08 01:08:42.699416 | orchestrator |
2025-09-08 01:08:42.699429 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-09-08 01:08:42.699442 | orchestrator | Monday 08 September 2025 01:07:09 +0000 (0:00:04.361) 0:00:27.067 ******
2025-09-08 01:08:42.699455 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:42.699467 | orchestrator |
2025-09-08 01:08:42.699480 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-09-08 01:08:42.699493 | orchestrator | Monday 08 September 2025 01:07:12 +0000 (0:00:03.530) 0:00:30.597 ******
2025-09-08 01:08:42.699506 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:42.699519 | orchestrator |
2025-09-08 01:08:42.699531 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-09-08 01:08:42.699544 | orchestrator | Monday 08 September 2025 01:07:16 +0000 (0:00:04.321) 0:00:34.919 ******
2025-09-08 01:08:42.699559 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:42.699571 | orchestrator |
2025-09-08 01:08:42.699582 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-09-08 01:08:42.699593 | orchestrator | Monday 08 September 2025 01:07:20 +0000 (0:00:03.795) 0:00:38.714 ******
2025-09-08 01:08:42.699621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes',
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:42.699644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:42.699664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:42.699676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:42.699689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:42.699707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:42.699719 | orchestrator |
2025-09-08 01:08:42.699730 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2025-09-08 01:08:42.699741 | orchestrator | Monday 08 September 2025 01:07:22 +0000 (0:00:01.589) 0:00:40.304 ******
2025-09-08 01:08:42.699752 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:42.699763 | orchestrator |
2025-09-08 01:08:42.699774 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2025-09-08 01:08:42.699792 | orchestrator | Monday 08 September 2025 01:07:22 +0000 (0:00:00.138) 0:00:40.442 ******
2025-09-08 01:08:42.699803 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:42.699832 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:42.699844 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:42.699854 | orchestrator |
2025-09-08 01:08:42.699865 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-09-08 01:08:42.699876 | orchestrator | Monday 08 September 2025 01:07:22 +0000 (0:00:00.482) 0:00:40.925 ******
2025-09-08 01:08:42.699887 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-08 01:08:42.699898 | orchestrator |
2025-09-08 01:08:42.699908 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2025-09-08 01:08:42.699919 | orchestrator | Monday 08 September 2025 01:07:23 +0000 (0:00:00.833) 0:00:41.759 ******
2025-09-08 01:08:42.699944 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:42.699957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:42.699968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:42.699989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:42.700014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:42.700025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:42.700037 | orchestrator |
2025-09-08 01:08:42.700048 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2025-09-08 01:08:42.700059 | orchestrator | Monday 08 September 2025 01:07:26 +0000 (0:00:02.704) 0:00:44.463 ******
2025-09-08 01:08:42.700070 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:08:42.700081 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:08:42.700092 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:08:42.700103 | orchestrator |
2025-09-08 01:08:42.700114 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-08 01:08:42.700125 | orchestrator | Monday 08 September 2025 01:07:26 +0000 (0:00:00.302) 0:00:44.766 ******
2025-09-08 01:08:42.700136 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:08:42.700147 | orchestrator | 2025-09-08 01:08:42.700158 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-08 01:08:42.700169 | orchestrator | Monday 08 September 2025 01:07:27 +0000 (0:00:00.691) 0:00:45.458 ****** 2025-09-08 01:08:42.700180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:42.700198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:42.700222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:42.700234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:42.700246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:42.700257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:42.700268 | orchestrator | 2025-09-08 01:08:42.700280 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-08 01:08:42.700290 | orchestrator | Monday 08 September 2025 01:07:29 +0000 (0:00:02.232) 0:00:47.690 ****** 2025-09-08 01:08:42.700316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-08 01:08:42.700328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:08:42.700345 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:42.700356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-08 01:08:42.700368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:08:42.700379 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:08:42.700390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-08 01:08:42.700416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:08:42.700428 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:08:42.700439 | orchestrator | 2025-09-08 01:08:42.700450 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-08 01:08:42.700461 | orchestrator | Monday 08 September 2025 01:07:30 +0000 (0:00:00.683) 0:00:48.373 ****** 2025-09-08 01:08:42.700477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-08 01:08:42.700489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:08:42.700501 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:42.700512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}})  2025-09-08 01:08:42.700530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:08:42.700541 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:08:42.700560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-08 01:08:42.700578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:08:42.700589 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:08:42.700600 | orchestrator | 2025-09-08 01:08:42.700611 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-08 01:08:42.700622 | orchestrator | Monday 08 September 2025 01:07:31 +0000 (0:00:01.078) 0:00:49.452 ****** 2025-09-08 01:08:42.700633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:42.700645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:42.700880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:42.700898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:42.700916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:42.700927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:42.700939 | orchestrator | 2025-09-08 01:08:42.700950 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-08 01:08:42.700961 | orchestrator | Monday 08 September 2025 01:07:33 +0000 (0:00:02.339) 0:00:51.792 ****** 2025-09-08 01:08:42.700981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:42.701000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:42.701017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:42.701044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:42.701056 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:42.701088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:42.701100 | orchestrator | 2025-09-08 01:08:42.701111 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-09-08 01:08:42.701122 | orchestrator | Monday 08 September 2025 01:07:40 +0000 (0:00:06.426) 0:00:58.219 ****** 2025-09-08 01:08:42.701139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-08 01:08:42.701156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:08:42.701167 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:08:42.701179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-08 01:08:42.701190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:08:42.701208 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:08:42.701219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-08 01:08:42.701235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:08:42.701247 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:42.701258 | orchestrator | 2025-09-08 01:08:42.701269 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-08 01:08:42.701280 | orchestrator | Monday 08 September 2025 01:07:41 +0000 (0:00:00.847) 0:00:59.066 ****** 2025-09-08 01:08:42.701296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:42.701308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:42.701326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:42.701337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:42.701356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:42.701373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:42.701384 | orchestrator |
2025-09-08 01:08:42.701395 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-08 01:08:42.701406 | orchestrator | Monday 08 September 2025 01:07:43 +0000 (0:00:02.286) 0:01:01.353 ******
2025-09-08 01:08:42.701417 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:42.701428 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:42.701445 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:42.701456 | orchestrator |
2025-09-08 01:08:42.701467 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2025-09-08 01:08:42.701478 | orchestrator | Monday 08 September 2025 01:07:43 +0000 (0:00:00.278) 0:01:01.631 ******
2025-09-08 01:08:42.701489 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:42.701502 | orchestrator |
2025-09-08 01:08:42.701515 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-09-08 01:08:42.701528 | orchestrator | Monday 08 September 2025 01:07:45 +0000 (0:00:02.074) 0:01:03.706 ******
2025-09-08 01:08:42.701541 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:42.701554 | orchestrator |
2025-09-08 01:08:42.701567 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-09-08 01:08:42.701580 | orchestrator | Monday 08 September 2025 01:07:47 +0000 (0:00:02.165) 0:01:05.871 ******
2025-09-08 01:08:42.701593 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:42.701605 | orchestrator |
2025-09-08 01:08:42.701617 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-08 01:08:42.701631 | orchestrator | Monday 08 September 2025 01:08:03 +0000 (0:00:15.507) 0:01:21.379 ******
2025-09-08 01:08:42.701643 | orchestrator |
2025-09-08 01:08:42.701655 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-08 01:08:42.701668 | orchestrator | Monday 08 September 2025 01:08:03 +0000 (0:00:00.084) 0:01:21.464 ******
2025-09-08 01:08:42.701681 | orchestrator |
2025-09-08 01:08:42.701694 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-08 01:08:42.701706 | orchestrator | Monday 08 September 2025 01:08:03 +0000 (0:00:00.090) 0:01:21.555 ******
2025-09-08 01:08:42.701719 | orchestrator |
2025-09-08 01:08:42.701731 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-09-08 01:08:42.701744 | orchestrator | Monday 08 September 2025 01:08:03 +0000 (0:00:00.069) 0:01:21.624 ******
2025-09-08 01:08:42.701756 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:42.701769 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:08:42.701781 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:08:42.701794 | orchestrator |
2025-09-08 01:08:42.701823 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-09-08 01:08:42.701837 | orchestrator | Monday 08 September 2025 01:08:25 +0000 (0:00:22.284) 0:01:43.909 ******
2025-09-08 01:08:42.701850 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:42.701860 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:08:42.701871 | orchestrator | changed: [testbed-node-2]
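[Editor's aside, not part of the job output.] The container definitions dumped above each carry a Kolla-style `healthcheck` mapping with durations stored as strings of seconds (e.g. `{'interval': '30', ..., 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511']}`). kolla-ansible ultimately hands such values to the container engine, and the Docker Engine API expects these durations in nanoseconds. The following is a minimal illustrative sketch of that conversion, written for this note rather than taken from the kolla-ansible code base; the function name and the exact key handling are assumptions:

```python
def to_docker_healthcheck(hc: dict) -> dict:
    """Convert a Kolla-style healthcheck mapping (seconds as strings)
    into the nanosecond-based dict shape the Docker Engine API expects.

    Illustrative sketch only; the real kolla-ansible container module
    handles more cases (disabled healthchecks, missing keys, etc.).
    """
    ns = 10**9  # Docker durations are expressed in nanoseconds
    return {
        "test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_curl ...']
        "interval": int(hc["interval"]) * ns,
        "timeout": int(hc["timeout"]) * ns,
        "start_period": int(hc["start_period"]) * ns,
        "retries": int(hc["retries"]),  # retries is a plain count, not a duration
    }


# Sample mapping copied verbatim from the magnum-api item in the log above:
sample = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9511"],
    "timeout": "30",
}
```

With `sample` as above, `to_docker_healthcheck(sample)["interval"]` is `30_000_000_000` nanoseconds, i.e. the 30-second interval shown in the log.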
2025-09-08 01:08:42.701881 | orchestrator | 2025-09-08 01:08:42.701892 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:08:42.701903 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-08 01:08:42.701915 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-08 01:08:42.701926 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-08 01:08:42.701937 | orchestrator | 2025-09-08 01:08:42.701947 | orchestrator | 2025-09-08 01:08:42.701958 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:08:42.701969 | orchestrator | Monday 08 September 2025 01:08:40 +0000 (0:00:14.582) 0:01:58.491 ****** 2025-09-08 01:08:42.701979 | orchestrator | =============================================================================== 2025-09-08 01:08:42.701990 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 22.28s 2025-09-08 01:08:42.702006 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.51s 2025-09-08 01:08:42.702067 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 14.58s 2025-09-08 01:08:42.702090 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.43s 2025-09-08 01:08:42.702101 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.13s 2025-09-08 01:08:42.702112 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.36s 2025-09-08 01:08:42.702122 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.32s 2025-09-08 01:08:42.702133 | orchestrator | service-ks-register : magnum | Creating users 
--------------------------- 4.09s 2025-09-08 01:08:42.702143 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.80s 2025-09-08 01:08:42.702154 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.71s 2025-09-08 01:08:42.702165 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.61s 2025-09-08 01:08:42.702175 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.53s 2025-09-08 01:08:42.702186 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.52s 2025-09-08 01:08:42.702202 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.70s 2025-09-08 01:08:42.702213 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.34s 2025-09-08 01:08:42.702224 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.29s 2025-09-08 01:08:42.702234 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.23s 2025-09-08 01:08:42.702245 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.17s 2025-09-08 01:08:42.702255 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.07s 2025-09-08 01:08:42.702266 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.59s 2025-09-08 01:08:42.702277 | orchestrator | 2025-09-08 01:08:42 | INFO  | Task 26e8ac65-7d02-417a-a073-0b1f938f29f4 is in state STARTED 2025-09-08 01:08:42.702288 | orchestrator | 2025-09-08 01:08:42 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:08:42.702299 | orchestrator | 2025-09-08 01:08:42 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:08:45.745629 | orchestrator | 2025-09-08 01:08:45 | INFO  | Task 
cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:08:45.747290 | orchestrator | 2025-09-08 01:08:45 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:08:45.749502 | orchestrator | 2025-09-08 01:08:45 | INFO  | Task 26e8ac65-7d02-417a-a073-0b1f938f29f4 is in state STARTED 2025-09-08 01:08:45.751543 | orchestrator | 2025-09-08 01:08:45 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:08:45.751795 | orchestrator | 2025-09-08 01:08:45 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:08:48.806217 | orchestrator | 2025-09-08 01:08:48 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED 2025-09-08 01:08:48.806735 | orchestrator | 2025-09-08 01:08:48 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:08:48.808240 | orchestrator | 2025-09-08 01:08:48 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:08:48.809690 | orchestrator | 2025-09-08 01:08:48 | INFO  | Task 26e8ac65-7d02-417a-a073-0b1f938f29f4 is in state SUCCESS 2025-09-08 01:08:48.811116 | orchestrator | 2025-09-08 01:08:48 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:08:48.811184 | orchestrator | 2025-09-08 01:08:48 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:08:51.864927 | orchestrator | 2025-09-08 01:08:51 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED 2025-09-08 01:08:51.866701 | orchestrator | 2025-09-08 01:08:51 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:08:51.868092 | orchestrator | 2025-09-08 01:08:51 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:08:51.869793 | orchestrator | 2025-09-08 01:08:51 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:08:51.869930 | orchestrator | 2025-09-08 01:08:51 | INFO  | Wait 1 
second(s) until the next check 2025-09-08 01:08:54.917547 | orchestrator | 2025-09-08 01:08:54 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED 2025-09-08 01:08:54.919522 | orchestrator | 2025-09-08 01:08:54 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:08:54.922112 | orchestrator | 2025-09-08 01:08:54 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:08:54.924010 | orchestrator | 2025-09-08 01:08:54 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:08:54.924308 | orchestrator | 2025-09-08 01:08:54 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:08:57.974748 | orchestrator | 2025-09-08 01:08:57 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED 2025-09-08 01:08:57.975550 | orchestrator | 2025-09-08 01:08:57 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:08:57.978209 | orchestrator | 2025-09-08 01:08:57 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:08:57.980822 | orchestrator | 2025-09-08 01:08:57 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:08:57.981240 | orchestrator | 2025-09-08 01:08:57 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:09:01.023055 | orchestrator | 2025-09-08 01:09:01 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED 2025-09-08 01:09:01.024461 | orchestrator | 2025-09-08 01:09:01 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:09:01.026754 | orchestrator | 2025-09-08 01:09:01 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:09:01.029561 | orchestrator | 2025-09-08 01:09:01 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:09:01.029586 | orchestrator | 2025-09-08 01:09:01 | INFO  | Wait 1 second(s) until the next check 
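The repeated "Task … is in state STARTED / Wait 1 second(s)" entries follow a simple poll-until-done loop: each round checks every still-pending task, drops the ones that reached SUCCESS, and sleeps before the next round. A minimal sketch, with a hypothetical `get_state` callable standing in for the real OSISM task-status API:

```python
import time


def wait_for_tasks(task_ids, get_state, poll_interval=1.0, timeout=600.0):
    """Poll task states until every task reports SUCCESS.

    get_state(task_id) -> str is a stand-in for the real status lookup.
    Finished tasks are dropped from later rounds, matching the log above
    where SUCCESS task IDs stop appearing in subsequent checks.
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"Wait {poll_interval:g} second(s) until the next check")
            time.sleep(poll_interval)
```

Under these assumptions a FAILED state would poll forever until the timeout; the real tooling presumably treats terminal failure states separately.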
2025-09-08 01:09:04.078623 | orchestrator | 2025-09-08 01:09:04 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED 2025-09-08 01:09:04.080921 | orchestrator | 2025-09-08 01:09:04 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:09:04.083173 | orchestrator | 2025-09-08 01:09:04 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:09:04.085791 | orchestrator | 2025-09-08 01:09:04 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:09:04.085813 | orchestrator | 2025-09-08 01:09:04 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:09:07.132908 | orchestrator | 2025-09-08 01:09:07 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED 2025-09-08 01:09:07.135642 | orchestrator | 2025-09-08 01:09:07 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:09:07.191453 | orchestrator | 2025-09-08 01:09:07 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:09:07.191493 | orchestrator | 2025-09-08 01:09:07 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:09:07.191506 | orchestrator | 2025-09-08 01:09:07 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:09:10.186544 | orchestrator | 2025-09-08 01:09:10 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED 2025-09-08 01:09:10.187027 | orchestrator | 2025-09-08 01:09:10 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:09:10.188413 | orchestrator | 2025-09-08 01:09:10 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:09:10.190463 | orchestrator | 2025-09-08 01:09:10 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:09:10.191345 | orchestrator | 2025-09-08 01:09:10 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:09:13.232252 | 
orchestrator | 2025-09-08 01:09:13 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED 2025-09-08 01:09:13.234255 | orchestrator | 2025-09-08 01:09:13 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:09:13.236219 | orchestrator | 2025-09-08 01:09:13 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state STARTED 2025-09-08 01:09:13.237549 | orchestrator | 2025-09-08 01:09:13 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:09:13.237801 | orchestrator | 2025-09-08 01:09:13 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:09:16.278775 | orchestrator | 2025-09-08 01:09:16 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED 2025-09-08 01:09:16.280024 | orchestrator | 2025-09-08 01:09:16 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:09:16.284297 | orchestrator | 2025-09-08 01:09:16 | INFO  | Task 7705de78-ad75-44b9-9325-9c376018107f is in state SUCCESS 2025-09-08 01:09:16.286981 | orchestrator | 2025-09-08 01:09:16.287068 | orchestrator | 2025-09-08 01:09:16.287086 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 01:09:16.287099 | orchestrator | 2025-09-08 01:09:16.287110 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 01:09:16.287121 | orchestrator | Monday 08 September 2025 01:08:44 +0000 (0:00:00.178) 0:00:00.179 ****** 2025-09-08 01:09:16.287133 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:09:16.287145 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:09:16.287156 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:09:16.287166 | orchestrator | 2025-09-08 01:09:16.287177 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 01:09:16.287188 | orchestrator | Monday 08 September 2025 01:08:44 +0000 (0:00:00.306) 
0:00:00.485 ****** 2025-09-08 01:09:16.287199 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-09-08 01:09:16.287211 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-09-08 01:09:16.287263 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-09-08 01:09:16.287274 | orchestrator | 2025-09-08 01:09:16.287285 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-09-08 01:09:16.287295 | orchestrator | 2025-09-08 01:09:16.287306 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-09-08 01:09:16.287316 | orchestrator | Monday 08 September 2025 01:08:45 +0000 (0:00:00.642) 0:00:01.127 ****** 2025-09-08 01:09:16.287327 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:09:16.287338 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:09:16.287348 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:09:16.287359 | orchestrator | 2025-09-08 01:09:16.287370 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:09:16.287397 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 01:09:16.287409 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 01:09:16.287420 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 01:09:16.287455 | orchestrator | 2025-09-08 01:09:16.287492 | orchestrator | 2025-09-08 01:09:16.287503 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:09:16.287514 | orchestrator | Monday 08 September 2025 01:08:45 +0000 (0:00:00.656) 0:00:01.783 ****** 2025-09-08 01:09:16.287524 | orchestrator | =============================================================================== 2025-09-08 01:09:16.287575 | orchestrator 
| Waiting for Nova public port to be UP ----------------------------------- 0.66s 2025-09-08 01:09:16.287586 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s 2025-09-08 01:09:16.287597 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-09-08 01:09:16.287607 | orchestrator | 2025-09-08 01:09:16.287618 | orchestrator | 2025-09-08 01:09:16.287628 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 01:09:16.287639 | orchestrator | 2025-09-08 01:09:16.287650 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-09-08 01:09:16.287661 | orchestrator | Monday 08 September 2025 00:59:44 +0000 (0:00:00.219) 0:00:00.219 ****** 2025-09-08 01:09:16.287672 | orchestrator | changed: [testbed-manager] 2025-09-08 01:09:16.287684 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:09:16.287694 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:09:16.287705 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:09:16.287716 | orchestrator | changed: [testbed-node-3] 2025-09-08 01:09:16.287726 | orchestrator | changed: [testbed-node-4] 2025-09-08 01:09:16.287736 | orchestrator | changed: [testbed-node-5] 2025-09-08 01:09:16.287747 | orchestrator | 2025-09-08 01:09:16.287757 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 01:09:16.287768 | orchestrator | Monday 08 September 2025 00:59:45 +0000 (0:00:00.674) 0:00:00.893 ****** 2025-09-08 01:09:16.287779 | orchestrator | changed: [testbed-manager] 2025-09-08 01:09:16.287789 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:09:16.287800 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:09:16.287810 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:09:16.287821 | orchestrator | changed: [testbed-node-3] 2025-09-08 01:09:16.287831 | orchestrator 
| changed: [testbed-node-4] 2025-09-08 01:09:16.287842 | orchestrator | changed: [testbed-node-5] 2025-09-08 01:09:16.287887 | orchestrator | 2025-09-08 01:09:16.287899 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 01:09:16.287910 | orchestrator | Monday 08 September 2025 00:59:46 +0000 (0:00:00.713) 0:00:01.607 ****** 2025-09-08 01:09:16.287921 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-09-08 01:09:16.287932 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-09-08 01:09:16.287942 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-09-08 01:09:16.287953 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-09-08 01:09:16.287964 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-09-08 01:09:16.287974 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-09-08 01:09:16.287985 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-09-08 01:09:16.287995 | orchestrator | 2025-09-08 01:09:16.288006 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-09-08 01:09:16.288017 | orchestrator | 2025-09-08 01:09:16.288027 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-08 01:09:16.288039 | orchestrator | Monday 08 September 2025 00:59:47 +0000 (0:00:00.866) 0:00:02.474 ****** 2025-09-08 01:09:16.288050 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:09:16.288060 | orchestrator | 2025-09-08 01:09:16.288071 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-09-08 01:09:16.288082 | orchestrator | Monday 08 September 2025 00:59:47 +0000 (0:00:00.657) 0:00:03.131 ****** 2025-09-08 01:09:16.288102 | orchestrator | changed: [testbed-node-0] => 
(item=nova_cell0) 2025-09-08 01:09:16.288128 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-09-08 01:09:16.288140 | orchestrator | 2025-09-08 01:09:16.288151 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-09-08 01:09:16.288161 | orchestrator | Monday 08 September 2025 00:59:51 +0000 (0:00:04.009) 0:00:07.140 ****** 2025-09-08 01:09:16.288172 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-08 01:09:16.288183 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-08 01:09:16.288193 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:09:16.288204 | orchestrator | 2025-09-08 01:09:16.288214 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-08 01:09:16.288225 | orchestrator | Monday 08 September 2025 00:59:55 +0000 (0:00:04.078) 0:00:11.219 ****** 2025-09-08 01:09:16.288236 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:09:16.288246 | orchestrator | 2025-09-08 01:09:16.288257 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-09-08 01:09:16.288267 | orchestrator | Monday 08 September 2025 00:59:56 +0000 (0:00:00.675) 0:00:11.894 ****** 2025-09-08 01:09:16.288278 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:09:16.288289 | orchestrator | 2025-09-08 01:09:16.288299 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-09-08 01:09:16.288310 | orchestrator | Monday 08 September 2025 00:59:58 +0000 (0:00:01.383) 0:00:13.278 ****** 2025-09-08 01:09:16.288321 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:09:16.288331 | orchestrator | 2025-09-08 01:09:16.288342 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-08 01:09:16.288353 | orchestrator | Monday 08 September 2025 01:00:01 +0000 (0:00:03.324) 0:00:16.602 ****** 2025-09-08 
01:09:16.288371 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.288382 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.288392 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.288403 | orchestrator | 2025-09-08 01:09:16.288413 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-08 01:09:16.288424 | orchestrator | Monday 08 September 2025 01:00:01 +0000 (0:00:00.375) 0:00:16.978 ****** 2025-09-08 01:09:16.288434 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:09:16.288445 | orchestrator | 2025-09-08 01:09:16.288456 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-09-08 01:09:16.288467 | orchestrator | Monday 08 September 2025 01:00:29 +0000 (0:00:27.905) 0:00:44.883 ****** 2025-09-08 01:09:16.288477 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:09:16.288488 | orchestrator | 2025-09-08 01:09:16.288499 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-08 01:09:16.288509 | orchestrator | Monday 08 September 2025 01:00:42 +0000 (0:00:12.589) 0:00:57.473 ****** 2025-09-08 01:09:16.288520 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:09:16.288530 | orchestrator | 2025-09-08 01:09:16.288541 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-08 01:09:16.288552 | orchestrator | Monday 08 September 2025 01:00:53 +0000 (0:00:11.729) 0:01:09.202 ****** 2025-09-08 01:09:16.288563 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:09:16.288573 | orchestrator | 2025-09-08 01:09:16.288584 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-09-08 01:09:16.288595 | orchestrator | Monday 08 September 2025 01:00:55 +0000 (0:00:01.096) 0:01:10.299 ****** 2025-09-08 01:09:16.288605 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.288616 | 
orchestrator | 2025-09-08 01:09:16.288627 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-08 01:09:16.288637 | orchestrator | Monday 08 September 2025 01:00:55 +0000 (0:00:00.471) 0:01:10.770 ****** 2025-09-08 01:09:16.288648 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-1, testbed-node-0, testbed-node-2 2025-09-08 01:09:16.288659 | orchestrator | 2025-09-08 01:09:16.288669 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-08 01:09:16.288688 | orchestrator | Monday 08 September 2025 01:00:56 +0000 (0:00:00.826) 0:01:11.596 ****** 2025-09-08 01:09:16.288699 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:09:16.288709 | orchestrator | 2025-09-08 01:09:16.288720 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-08 01:09:16.288730 | orchestrator | Monday 08 September 2025 01:01:14 +0000 (0:00:18.509) 0:01:30.106 ****** 2025-09-08 01:09:16.288741 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.288751 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.288762 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.288773 | orchestrator | 2025-09-08 01:09:16.288783 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-09-08 01:09:16.288794 | orchestrator | 2025-09-08 01:09:16.288805 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-08 01:09:16.288816 | orchestrator | Monday 08 September 2025 01:01:15 +0000 (0:00:00.313) 0:01:30.420 ****** 2025-09-08 01:09:16.288826 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:09:16.288837 | orchestrator | 2025-09-08 01:09:16.288847 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 
2025-09-08 01:09:16.288910 | orchestrator | Monday 08 September 2025 01:01:15 +0000 (0:00:00.618) 0:01:31.038 ****** 2025-09-08 01:09:16.288921 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.288932 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.288943 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:09:16.288953 | orchestrator | 2025-09-08 01:09:16.288964 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-09-08 01:09:16.288975 | orchestrator | Monday 08 September 2025 01:01:17 +0000 (0:00:02.173) 0:01:33.212 ****** 2025-09-08 01:09:16.288985 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.288996 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.289007 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:09:16.289017 | orchestrator | 2025-09-08 01:09:16.289028 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-08 01:09:16.289039 | orchestrator | Monday 08 September 2025 01:01:20 +0000 (0:00:02.216) 0:01:35.428 ****** 2025-09-08 01:09:16.289050 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.289060 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.289077 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.289088 | orchestrator | 2025-09-08 01:09:16.289099 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-08 01:09:16.289110 | orchestrator | Monday 08 September 2025 01:01:20 +0000 (0:00:00.383) 0:01:35.811 ****** 2025-09-08 01:09:16.289121 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-08 01:09:16.289132 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.289143 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-08 01:09:16.289154 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.289165 | orchestrator | ok: [testbed-node-0] => 
(item=None) 2025-09-08 01:09:16.289175 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-09-08 01:09:16.289186 | orchestrator | 2025-09-08 01:09:16.289197 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-08 01:09:16.289208 | orchestrator | Monday 08 September 2025 01:01:28 +0000 (0:00:07.748) 0:01:43.560 ****** 2025-09-08 01:09:16.289219 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.289230 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.289241 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.289251 | orchestrator | 2025-09-08 01:09:16.289262 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-08 01:09:16.289273 | orchestrator | Monday 08 September 2025 01:01:28 +0000 (0:00:00.336) 0:01:43.897 ****** 2025-09-08 01:09:16.289284 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-08 01:09:16.289295 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.289317 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-08 01:09:16.289334 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.289346 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-08 01:09:16.289356 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.289367 | orchestrator | 2025-09-08 01:09:16.289378 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-08 01:09:16.289389 | orchestrator | Monday 08 September 2025 01:01:29 +0000 (0:00:00.796) 0:01:44.693 ****** 2025-09-08 01:09:16.289400 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.289410 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.289421 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:09:16.289432 | orchestrator | 2025-09-08 01:09:16.289443 | orchestrator | TASK [nova-cell : Copying over 
config.json files for nova-cell-bootstrap] ****** 2025-09-08 01:09:16.289454 | orchestrator | Monday 08 September 2025 01:01:29 +0000 (0:00:00.476) 0:01:45.170 ****** 2025-09-08 01:09:16.289464 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.289475 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.289486 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:09:16.289497 | orchestrator | 2025-09-08 01:09:16.289507 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-09-08 01:09:16.289518 | orchestrator | Monday 08 September 2025 01:01:30 +0000 (0:00:00.969) 0:01:46.140 ****** 2025-09-08 01:09:16.289529 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.289539 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.289550 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:09:16.289561 | orchestrator | 2025-09-08 01:09:16.289572 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-09-08 01:09:16.289583 | orchestrator | Monday 08 September 2025 01:01:32 +0000 (0:00:02.054) 0:01:48.194 ****** 2025-09-08 01:09:16.289593 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.289604 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.289615 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:09:16.289626 | orchestrator | 2025-09-08 01:09:16.289636 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-08 01:09:16.289647 | orchestrator | Monday 08 September 2025 01:01:53 +0000 (0:00:20.672) 0:02:08.866 ****** 2025-09-08 01:09:16.289658 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.289669 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.289680 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:09:16.289690 | orchestrator | 2025-09-08 01:09:16.289701 | orchestrator | TASK [nova-cell : Extract current cell settings 
from list] *********************
2025-09-08 01:09:16.289712 | orchestrator | Monday 08 September 2025 01:02:05 +0000 (0:00:11.510) 0:02:20.377 ******
2025-09-08 01:09:16.289723 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:09:16.289734 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.289744 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.289755 | orchestrator |
2025-09-08 01:09:16.289766 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-09-08 01:09:16.289776 | orchestrator | Monday 08 September 2025 01:02:06 +0000 (0:00:01.167) 0:02:21.545 ******
2025-09-08 01:09:16.289787 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.289798 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.289808 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:09:16.289819 | orchestrator |
2025-09-08 01:09:16.289830 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-09-08 01:09:16.289840 | orchestrator | Monday 08 September 2025 01:02:17 +0000 (0:00:11.026) 0:02:32.572 ******
2025-09-08 01:09:16.290096 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.290246 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.290261 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.290274 | orchestrator |
2025-09-08 01:09:16.290287 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-08 01:09:16.290301 | orchestrator | Monday 08 September 2025 01:02:18 +0000 (0:00:01.466) 0:02:34.038 ******
2025-09-08 01:09:16.290346 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.290357 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.290368 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.290379 | orchestrator |
2025-09-08 01:09:16.290391 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-09-08 01:09:16.290402 | orchestrator |
2025-09-08 01:09:16.290413 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-08 01:09:16.290424 | orchestrator | Monday 08 September 2025 01:02:19 +0000 (0:00:00.554) 0:02:34.593 ******
2025-09-08 01:09:16.290435 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:09:16.290448 | orchestrator |
2025-09-08 01:09:16.290494 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-09-08 01:09:16.290506 | orchestrator | Monday 08 September 2025 01:02:19 +0000 (0:00:00.530) 0:02:35.123 ******
2025-09-08 01:09:16.290518 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-09-08 01:09:16.290529 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-09-08 01:09:16.290540 | orchestrator |
2025-09-08 01:09:16.290551 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-09-08 01:09:16.290562 | orchestrator | Monday 08 September 2025 01:02:23 +0000 (0:00:03.501) 0:02:38.625 ******
2025-09-08 01:09:16.290574 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-09-08 01:09:16.290587 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-09-08 01:09:16.290599 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-09-08 01:09:16.290610 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-09-08 01:09:16.290621 | orchestrator |
2025-09-08 01:09:16.290632 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-09-08 01:09:16.290643 | orchestrator | Monday 08 September 2025 01:02:29 +0000 (0:00:06.140) 0:02:44.765 ******
2025-09-08 01:09:16.290653 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-08 01:09:16.290664 | orchestrator |
2025-09-08 01:09:16.290692 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-09-08 01:09:16.290703 | orchestrator | Monday 08 September 2025 01:02:32 +0000 (0:00:03.244) 0:02:48.010 ******
2025-09-08 01:09:16.290714 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-08 01:09:16.290725 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-09-08 01:09:16.290735 | orchestrator |
2025-09-08 01:09:16.290746 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-09-08 01:09:16.290757 | orchestrator | Monday 08 September 2025 01:02:36 +0000 (0:00:03.840) 0:02:51.850 ******
2025-09-08 01:09:16.290768 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-08 01:09:16.290779 | orchestrator |
2025-09-08 01:09:16.290790 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-09-08 01:09:16.290801 | orchestrator | Monday 08 September 2025 01:02:40 +0000 (0:00:03.406) 0:02:55.257 ******
2025-09-08 01:09:16.290811 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-09-08 01:09:16.290822 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-09-08 01:09:16.290832 | orchestrator |
2025-09-08 01:09:16.290843 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-09-08 01:09:16.290894 | orchestrator | Monday 08 September 2025 01:02:47 +0000 (0:00:07.511) 0:03:02.768 ******
2025-09-08 01:09:16.290912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value':
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 01:09:16.290950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 01:09:16.290971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 01:09:16.290984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.291005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.291016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.291028 | orchestrator | 2025-09-08 01:09:16.291039 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-09-08 01:09:16.291050 | orchestrator | Monday 08 September 2025 01:02:49 +0000 (0:00:01.592) 0:03:04.361 ****** 2025-09-08 01:09:16.291060 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.291071 | orchestrator | 2025-09-08 01:09:16.291082 | 
orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-09-08 01:09:16.291093 | orchestrator | Monday 08 September 2025 01:02:49 +0000 (0:00:00.202) 0:03:04.564 ******
2025-09-08 01:09:16.291103 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.291114 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.291125 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.291136 | orchestrator |
2025-09-08 01:09:16.291147 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-09-08 01:09:16.291158 | orchestrator | Monday 08 September 2025 01:02:49 +0000 (0:00:00.323) 0:03:04.887 ******
2025-09-08 01:09:16.291176 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-08 01:09:16.291186 | orchestrator |
2025-09-08 01:09:16.291197 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-09-08 01:09:16.291208 | orchestrator | Monday 08 September 2025 01:02:50 +0000 (0:00:00.938) 0:03:05.826 ******
2025-09-08 01:09:16.291218 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.291229 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.291240 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.291250 | orchestrator |
2025-09-08 01:09:16.291261 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-08 01:09:16.291272 | orchestrator | Monday 08 September 2025 01:02:50 +0000 (0:00:00.314) 0:03:06.140 ******
2025-09-08 01:09:16.291282 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:09:16.291293 | orchestrator |
2025-09-08 01:09:16.291304 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-09-08 01:09:16.291315 | orchestrator | Monday 08 September 2025 01:02:51 +0000 (0:00:00.556) 0:03:06.697
****** 2025-09-08 01:09:16.291332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 01:09:16.291352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 01:09:16.291364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.291387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 01:09:16.291412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.291430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.291442 | orchestrator | 2025-09-08 01:09:16.291453 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-08 01:09:16.291464 | orchestrator | Monday 08 September 2025 01:02:55 +0000 (0:00:03.667) 0:03:10.364 ****** 2025-09-08 01:09:16.291476 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-08 01:09:16.291488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.291499 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.291524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-08 01:09:16.291543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.291555 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.291567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-08 01:09:16.291579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.291590 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.291600 | orchestrator | 2025-09-08 01:09:16.291611 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-08 01:09:16.291622 | 
orchestrator | Monday 08 September 2025 01:02:56 +0000 (0:00:01.807) 0:03:12.171 ****** 2025-09-08 01:09:16.291645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-08 01:09:16.291668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.291679 | 
orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.291691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-08 01:09:16.291703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.291714 | orchestrator | skipping: [testbed-node-1] 
2025-09-08 01:09:16.292045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-08 01:09:16.292082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.292094 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.292104 | orchestrator | 
2025-09-08 01:09:16.292115 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-08 01:09:16.292126 | orchestrator | Monday 08 September 2025 01:02:57 +0000 (0:00:00.980) 0:03:13.152 ****** 2025-09-08 01:09:16.292138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 01:09:16.292150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 01:09:16.292171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-08 01:09:16.292195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.292207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.292219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.292230 | orchestrator |
2025-09-08 01:09:16.292241 | orchestrator | TASK [nova : Copying over nova.conf] *******************************************
2025-09-08 01:09:16.292252 | orchestrator | Monday 08 September 2025 01:03:01 +0000 (0:00:03.339) 0:03:16.492 ******
2025-09-08 01:09:16.292269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-08 01:09:16.292287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-08 01:09:16.292306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-08 01:09:16.292318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.292329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.292348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.292365 | orchestrator |
2025-09-08 01:09:16.292376 | orchestrator | TASK [nova : Copying over existing policy file] ********************************
2025-09-08 01:09:16.292387 | orchestrator | Monday 08 September 2025 01:03:09 +0000 (0:00:08.317) 0:03:24.810 ******
2025-09-08 01:09:16.292404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-08 01:09:16.292416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.292427 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.292439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-08 01:09:16.292451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.292468 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.292487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-08 01:09:16.292511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.292523 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.292533 | orchestrator |
2025-09-08 01:09:16.292544 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2025-09-08 01:09:16.292603 | orchestrator | Monday 08 September 2025 01:03:10 +0000 (0:00:00.518) 0:03:25.329 ******
2025-09-08 01:09:16.292615 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:09:16.292626 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:09:16.292637 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:09:16.292648 | orchestrator |
2025-09-08 01:09:16.292658 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2025-09-08 01:09:16.292669 | orchestrator | Monday 08 September 2025 01:03:12 +0000 (0:00:02.058) 0:03:27.387 ******
2025-09-08 01:09:16.292680 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.292690 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.292701 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.292712 | orchestrator |
2025-09-08 01:09:16.292723 | orchestrator | TASK [nova : Check nova containers] ********************************************
2025-09-08 01:09:16.292733 | orchestrator | Monday 08 September 2025 01:03:12 +0000 (0:00:00.567) 0:03:27.955 ******
2025-09-08 01:09:16.292745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-08 01:09:16.292800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.292819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-08 01:09:16.292832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-08 01:09:16.292844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.292885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.292896 | orchestrator |
2025-09-08 01:09:16.292907 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-09-08 01:09:16.292918 | orchestrator | Monday 08 September 2025 01:03:15 +0000 (0:00:02.743) 0:03:30.699 ******
2025-09-08 01:09:16.292928 | orchestrator |
2025-09-08 01:09:16.292939 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-09-08 01:09:16.292956 | orchestrator | Monday 08 September 2025 01:03:15 +0000 (0:00:00.107) 0:03:30.807 ******
2025-09-08 01:09:16.292967 | orchestrator |
2025-09-08 01:09:16.292978 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-09-08 01:09:16.292988 | orchestrator | Monday 08 September 2025 01:03:15 +0000 (0:00:00.113) 0:03:30.920 ******
2025-09-08 01:09:16.292999 | orchestrator |
2025-09-08 01:09:16.293010 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2025-09-08 01:09:16.293020 | orchestrator | Monday 08 September 2025 01:03:15 +0000 (0:00:00.140) 0:03:31.060 ******
2025-09-08 01:09:16.293031 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:09:16.293042 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:09:16.293053 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:09:16.293063 | orchestrator |
2025-09-08 01:09:16.293074 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2025-09-08 01:09:16.293084 | orchestrator | Monday 08 September 2025 01:03:39 +0000 (0:00:24.056) 0:03:55.117 ******
2025-09-08 01:09:16.293095 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:09:16.293106 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:09:16.293117 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:09:16.293127 | orchestrator |
2025-09-08 01:09:16.293138 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-09-08 01:09:16.293148 | orchestrator |
2025-09-08 01:09:16.293159 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-08 01:09:16.293170 | orchestrator | Monday 08 September 2025 01:03:46 +0000 (0:00:07.094) 0:04:02.211 ******
2025-09-08 01:09:16.293181 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:09:16.293193 | orchestrator |
2025-09-08 01:09:16.293208 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-08 01:09:16.293219 | orchestrator | Monday 08 September 2025 01:03:48 +0000 (0:00:01.849) 0:04:04.061 ******
2025-09-08 01:09:16.293244 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:09:16.293256 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:09:16.293266 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:09:16.293277 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.293288 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.293310 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.293321 | orchestrator |
2025-09-08 01:09:16.293332 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-09-08 01:09:16.293342 | orchestrator | Monday 08 September 2025 01:03:49 +0000 (0:00:00.624) 0:04:04.685 ******
2025-09-08 01:09:16.293353 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.293364 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.293374 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.293385 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 01:09:16.293404 | orchestrator |
2025-09-08 01:09:16.293415 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-08 01:09:16.293425 | orchestrator | Monday 08 September 2025 01:03:50 +0000 (0:00:01.265) 0:04:05.950 ******
2025-09-08 01:09:16.293437 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-09-08 01:09:16.293447 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-09-08 01:09:16.293458 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-09-08 01:09:16.293469 | orchestrator |
2025-09-08 01:09:16.293479 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-08 01:09:16.293490 | orchestrator | Monday 08 September 2025 01:03:51 +0000 (0:00:00.719) 0:04:06.669 ******
2025-09-08 01:09:16.293501 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-09-08 01:09:16.293641 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-09-08 01:09:16.293652 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-09-08 01:09:16.293663 | orchestrator |
2025-09-08 01:09:16.293674 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-08 01:09:16.293685 | orchestrator | Monday 08 September 2025 01:03:52 +0000 (0:00:01.530) 0:04:08.200 ******
2025-09-08 01:09:16.293696 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-09-08 01:09:16.293707 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:09:16.293717 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-09-08 01:09:16.293728 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:09:16.293739 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-09-08 01:09:16.293750 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:09:16.293761 | orchestrator |
2025-09-08 01:09:16.293772 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-09-08 01:09:16.293783 | orchestrator | Monday 08 September 2025 01:03:53 +0000 (0:00:00.826) 0:04:09.026 ******
2025-09-08 01:09:16.293794 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-08 01:09:16.293804 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-08 01:09:16.293815 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.293826 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-08 01:09:16.293837 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-08 01:09:16.293848 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.293905 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-08 01:09:16.293917 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-08 01:09:16.293927 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-08 01:09:16.293938 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.293949 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-08 01:09:16.293960 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-08 01:09:16.294981 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-08 01:09:16.295015 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-08 01:09:16.295026 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-08 01:09:16.295037 | orchestrator |
2025-09-08 01:09:16.295048 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2025-09-08 01:09:16.295059 | orchestrator | Monday 08 September 2025 01:03:54 +0000 (0:00:01.109) 0:04:10.135 ******
2025-09-08 01:09:16.295070 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:09:16.295080 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.295091 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.295102 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:09:16.295126 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.295137 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:09:16.295147 | orchestrator |
2025-09-08 01:09:16.295158 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2025-09-08 01:09:16.295169 | orchestrator | Monday 08 September 2025 01:03:57 +0000 (0:00:02.202) 0:04:12.338 ******
2025-09-08 01:09:16.295179 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.295190 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.295201 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.295211 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:09:16.295222 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:09:16.295231 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:09:16.295241 | orchestrator |
2025-09-08 01:09:16.295251 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-09-08 01:09:16.295268 | orchestrator | Monday 08 September 2025 01:03:59 +0000 (0:00:02.306) 0:04:14.648 ******
2025-09-08 01:09:16.295280 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-08 01:09:16.295293 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-08 01:09:16.295303 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-08 01:09:16.295322 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-08 01:09:16.295341 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-08 01:09:16.295356 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.295368 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-08 01:09:16.295378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-08 01:09:16.295389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-08 01:09:16.295399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-08 01:09:16.295423 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.295439 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.295450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.295460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value':
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.295470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.295479 | orchestrator |
2025-09-08 01:09:16.295489 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-08 01:09:16.295502 | orchestrator | Monday 08 September 2025 01:04:04 +0000 (0:00:05.022) 0:04:19.671 ******
2025-09-08 01:09:16.295514 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:09:16.295527 | orchestrator |
2025-09-08 01:09:16.295538 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-09-08 01:09:16.295556 | orchestrator | Monday 08 September 2025 01:04:06 +0000 (0:00:02.011) 0:04:21.682 ******
2025-09-08 01:09:16.295574 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value':
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-08 01:09:16.295592 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-08 01:09:16.295605 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-08 01:09:16.295617 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-08 01:09:16.295629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-08 01:09:16.295640 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-08 01:09:16.295665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-08 01:09:16.295678 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-08 01:09:16.295698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-08 01:09:16.295711 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.295723 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.295735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.295762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.295774 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.295791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.295802 | orchestrator |
2025-09-08 01:09:16.295813 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-09-08 01:09:16.295826 | orchestrator | Monday 08 September 2025 01:04:11 +0000 (0:00:04.721) 0:04:26.404 ******
2025-09-08 01:09:16.295839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-08 01:09:16.295868 | orchestrator | skipping: [testbed-node-4] => (item={'key':
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-08 01:09:16.295885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.295900 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:09:16.295911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-08 01:09:16.295925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-08 01:09:16.295935 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.295945 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:09:16.295955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-08 01:09:16.295972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-08 01:09:16.295988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.295998 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:09:16.296009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-08 01:09:16.296023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.296034 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.296044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-08 01:09:16.296054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.296070 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.296080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-08 01:09:16.296095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.296105 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.296115 | orchestrator |
2025-09-08 01:09:16.296124 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-09-08 01:09:16.296134 | orchestrator | Monday 08 September 2025 01:04:14 +0000 (0:00:02.975) 0:04:29.380 ******
2025-09-08 01:09:16.296144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-08 01:09:16.296159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-08 01:09:16.296170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-08 01:09:16.296188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-08 01:09:16.296198 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.296214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.296224 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:09:16.296234 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:09:16.296249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-08 01:09:16.296259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.296269 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.296279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-08 01:09:16.296296 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-08 01:09:16.296311 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.296321 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:09:16.296331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-08 01:09:16.296345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.296356 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.296366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-08 01:09:16.296382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.296392 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.296401 | orchestrator | 2025-09-08 01:09:16.296411 | orchestrator | TASK [nova-cell : include_tasks] 
*********************************************** 2025-09-08 01:09:16.296421 | orchestrator | Monday 08 September 2025 01:04:18 +0000 (0:00:03.907) 0:04:33.287 ******
2025-09-08 01:09:16.296431 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.296440 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.296450 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.296459 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 01:09:16.296469 | orchestrator |
2025-09-08 01:09:16.296479 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-09-08 01:09:16.296488 | orchestrator | Monday 08 September 2025 01:04:19 +0000 (0:00:01.504) 0:04:34.791 ******
2025-09-08 01:09:16.296498 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-08 01:09:16.296507 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-08 01:09:16.296517 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-08 01:09:16.296526 | orchestrator |
2025-09-08 01:09:16.296536 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-09-08 01:09:16.296545 | orchestrator | Monday 08 September 2025 01:04:21 +0000 (0:00:02.248) 0:04:37.040 ******
2025-09-08 01:09:16.296555 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-08 01:09:16.296564 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-08 01:09:16.296574 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-08 01:09:16.296583 | orchestrator |
2025-09-08 01:09:16.296593 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-09-08 01:09:16.296602 | orchestrator | Monday 08 September 2025 01:04:24 +0000 (0:00:02.273) 0:04:39.313 ******
2025-09-08 01:09:16.296612 | orchestrator | ok: [testbed-node-3]
2025-09-08 01:09:16.296622 | orchestrator | ok: [testbed-node-4]
2025-09-08 01:09:16.296632 | orchestrator | ok: [testbed-node-5]
2025-09-08 01:09:16.296641 | orchestrator |
2025-09-08 01:09:16.296651 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-09-08 01:09:16.296660 | orchestrator | Monday 08 September 2025 01:04:24 +0000 (0:00:00.615) 0:04:39.929 ******
2025-09-08 01:09:16.296670 | orchestrator | ok: [testbed-node-3]
2025-09-08 01:09:16.296679 | orchestrator | ok: [testbed-node-4]
2025-09-08 01:09:16.296689 | orchestrator | ok: [testbed-node-5]
2025-09-08 01:09:16.296698 | orchestrator |
2025-09-08 01:09:16.296712 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-09-08 01:09:16.296722 | orchestrator | Monday 08 September 2025 01:04:25 +0000 (0:00:01.312) 0:04:41.244 ******
2025-09-08 01:09:16.296731 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-08 01:09:16.296741 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-08 01:09:16.296751 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-08 01:09:16.296760 | orchestrator |
2025-09-08 01:09:16.296770 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-09-08 01:09:16.296779 | orchestrator | Monday 08 September 2025 01:04:27 +0000 (0:00:01.528) 0:04:42.773 ******
2025-09-08 01:09:16.296789 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-08 01:09:16.296798 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-08 01:09:16.296814 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-08 01:09:16.296823 | orchestrator |
2025-09-08 01:09:16.296833 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-09-08 01:09:16.296842 | orchestrator | Monday 08 September 2025 01:04:29 +0000 (0:00:04.059) 0:04:44.326 ******
2025-09-08 01:09:16.296868 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-08 01:09:16.296877 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-08 01:09:16.296887 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-08 01:09:16.296896 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-09-08 01:09:16.296906 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-09-08 01:09:16.296923 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-09-08 01:09:16.296933 | orchestrator |
2025-09-08 01:09:16.296943 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-09-08 01:09:16.296953 | orchestrator | Monday 08 September 2025 01:04:33 +0000 (0:00:04.059) 0:04:48.386 ******
2025-09-08 01:09:16.296962 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:09:16.296972 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:09:16.296981 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:09:16.296991 | orchestrator |
2025-09-08 01:09:16.297000 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-09-08 01:09:16.297010 | orchestrator | Monday 08 September 2025 01:04:33 +0000 (0:00:00.397) 0:04:48.783 ******
2025-09-08 01:09:16.297019 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:09:16.297029 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:09:16.297038 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:09:16.297048 | orchestrator |
2025-09-08 01:09:16.297057 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-09-08 01:09:16.297067 | orchestrator | Monday 08 September 2025 01:04:33 +0000 (0:00:00.294) 0:04:49.078 ******
2025-09-08 01:09:16.297076 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:09:16.297086 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:09:16.297095 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:09:16.297105 | orchestrator |
2025-09-08 01:09:16.297114 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-09-08 01:09:16.297124 | orchestrator | Monday 08 September 2025 01:04:34 +0000 (0:00:01.161) 0:04:50.239 ******
2025-09-08 01:09:16.297134 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-08 01:09:16.297144 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-08 01:09:16.297154 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-08 01:09:16.297164 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-08 01:09:16.297174 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-08 01:09:16.297183 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-08 01:09:16.297193 | orchestrator |
2025-09-08 01:09:16.297203 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-09-08 01:09:16.297212 | orchestrator | Monday 08 September 2025 01:04:39 +0000 (0:00:04.376) 0:04:54.616 ******
2025-09-08 01:09:16.297222 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-08 01:09:16.297232 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-08 01:09:16.297241 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-08 01:09:16.297251 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-08 01:09:16.297268 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:09:16.297277 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-08 01:09:16.297287 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:09:16.297296 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-08 01:09:16.297306 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:09:16.297315 | orchestrator |
2025-09-08 01:09:16.297325 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-09-08 01:09:16.297334 | orchestrator | Monday 08 September 2025 01:04:42 +0000 (0:00:03.343) 0:04:57.959 ******
2025-09-08 01:09:16.297344 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:09:16.297353 | orchestrator |
2025-09-08 01:09:16.297363 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-09-08 01:09:16.297373 | orchestrator | Monday 08 September 2025 01:04:42 +0000 (0:00:00.163) 0:04:58.122 ******
2025-09-08 01:09:16.297382 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:09:16.297392 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:09:16.297401 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:09:16.297416 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.297426 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.297435 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.297445 | orchestrator |
2025-09-08 01:09:16.297454 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-09-08 01:09:16.297464 | orchestrator | Monday 08 September 2025 01:04:43 +0000 (0:00:00.615) 0:04:58.738 ******
2025-09-08 01:09:16.297473 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-08 01:09:16.297483 | orchestrator |
2025-09-08 01:09:16.297493 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-09-08
01:09:16.297502 | orchestrator | Monday 08 September 2025 01:04:44 +0000 (0:00:00.698) 0:04:59.437 ****** 2025-09-08 01:09:16.297512 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:09:16.297521 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:09:16.297531 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:09:16.297540 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.297550 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.297559 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.297569 | orchestrator | 2025-09-08 01:09:16.297578 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-09-08 01:09:16.297588 | orchestrator | Monday 08 September 2025 01:04:45 +0000 (0:00:00.823) 0:05:00.260 ****** 2025-09-08 01:09:16.297603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-08 01:09:16.297614 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-08 01:09:16.297631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-08 01:09:16.297641 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': 
{'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-08 01:09:16.297657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-08 01:09:16.297671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-08 01:09:16.297681 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-08 01:09:16.297692 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-08 01:09:16.297707 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-08 01:09:16.297718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.297728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.297743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.297758 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.297769 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.297788 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.297798 | orchestrator | 2025-09-08 01:09:16.297808 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-09-08 01:09:16.297817 | orchestrator | Monday 08 September 2025 01:04:48 +0000 (0:00:03.968) 0:05:04.229 ****** 2025-09-08 01:09:16.297827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-08 01:09:16.297843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-08 01:09:16.297873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-08 01:09:16.297884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-08 01:09:16.297902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-08 01:09:16.297912 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-08 01:09:16.297981 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.297994 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.298008 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.298055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-08 01:09:16.298065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-08 01:09:16.298075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-08 01:09:16.298085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.298102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.298112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:09:16.298122 | orchestrator | 2025-09-08 01:09:16.298132 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-09-08 01:09:16.298142 | orchestrator | Monday 08 September 2025 01:04:55 +0000 (0:00:06.337) 0:05:10.567 ****** 2025-09-08 01:09:16.298152 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:09:16.298167 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:09:16.298177 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:09:16.298187 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.298196 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.298205 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.298215 | orchestrator | 2025-09-08 01:09:16.298224 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-09-08 01:09:16.298234 | orchestrator | Monday 08 September 2025 01:04:56 +0000 (0:00:01.373) 0:05:11.940 ****** 2025-09-08 01:09:16.298243 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-08 01:09:16.298253 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-08 
01:09:16.298262 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-08 01:09:16.298272 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-08 01:09:16.298281 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-08 01:09:16.298291 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.298300 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-08 01:09:16.298310 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-08 01:09:16.298319 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-08 01:09:16.298329 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.298338 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-08 01:09:16.298348 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.298357 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-08 01:09:16.298366 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-08 01:09:16.298376 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-08 01:09:16.298385 | orchestrator | 2025-09-08 01:09:16.298395 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-09-08 01:09:16.298404 | orchestrator | Monday 08 September 2025 01:05:00 +0000 (0:00:03.846) 0:05:15.787 ****** 2025-09-08 01:09:16.298413 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:09:16.298423 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:09:16.298432 | orchestrator | skipping: 
[testbed-node-5] 2025-09-08 01:09:16.298442 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.298451 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.298460 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.298470 | orchestrator | 2025-09-08 01:09:16.298479 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-09-08 01:09:16.298488 | orchestrator | Monday 08 September 2025 01:05:01 +0000 (0:00:00.550) 0:05:16.338 ****** 2025-09-08 01:09:16.298498 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-08 01:09:16.298508 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-08 01:09:16.298517 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-08 01:09:16.298526 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-08 01:09:16.298536 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-08 01:09:16.298663 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-08 01:09:16.298689 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-08 01:09:16.298699 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-08 01:09:16.298709 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-08 01:09:16.298718 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 
'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-08 01:09:16.298728 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.298737 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-08 01:09:16.298747 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.298756 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-08 01:09:16.298766 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.298775 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-08 01:09:16.298789 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-08 01:09:16.298799 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-08 01:09:16.298809 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-08 01:09:16.298818 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-08 01:09:16.298828 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-08 01:09:16.298837 | orchestrator | 2025-09-08 01:09:16.298847 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-09-08 01:09:16.298912 | orchestrator | Monday 08 September 2025 01:05:07 +0000 (0:00:05.911) 0:05:22.250 ****** 2025-09-08 01:09:16.298922 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-08 01:09:16.298932 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-08 01:09:16.298942 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-08 01:09:16.298951 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-08 01:09:16.298961 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-08 01:09:16.298971 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-08 01:09:16.298980 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-08 01:09:16.298990 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-08 01:09:16.298999 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-08 01:09:16.299009 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-08 01:09:16.299018 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-08 01:09:16.299028 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-08 01:09:16.299038 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.299047 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-08 01:09:16.299057 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-08 01:09:16.299065 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.299072 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-08 01:09:16.299087 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.299095 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 
'id_rsa'}) 2025-09-08 01:09:16.299103 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-08 01:09:16.299111 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-08 01:09:16.299119 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-08 01:09:16.299126 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-08 01:09:16.299134 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-08 01:09:16.299142 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-08 01:09:16.299149 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-08 01:09:16.299163 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-08 01:09:16.299171 | orchestrator | 2025-09-08 01:09:16.299178 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-09-08 01:09:16.299186 | orchestrator | Monday 08 September 2025 01:05:13 +0000 (0:00:06.927) 0:05:29.177 ****** 2025-09-08 01:09:16.299194 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:09:16.299202 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:09:16.299210 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:09:16.299218 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.299225 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.299233 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.299241 | orchestrator | 2025-09-08 01:09:16.299248 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-09-08 01:09:16.299256 | orchestrator | Monday 08 September 2025 01:05:14 +0000 (0:00:00.919) 0:05:30.096 ****** 
2025-09-08 01:09:16.299264 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:09:16.299272 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:09:16.299279 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:09:16.299287 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.299295 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.299302 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.299310 | orchestrator | 2025-09-08 01:09:16.299318 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-08 01:09:16.299325 | orchestrator | Monday 08 September 2025 01:05:15 +0000 (0:00:00.806) 0:05:30.903 ****** 2025-09-08 01:09:16.299333 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.299341 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.299349 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.299356 | orchestrator | changed: [testbed-node-3] 2025-09-08 01:09:16.299368 | orchestrator | changed: [testbed-node-4] 2025-09-08 01:09:16.299376 | orchestrator | changed: [testbed-node-5] 2025-09-08 01:09:16.299384 | orchestrator | 2025-09-08 01:09:16.299392 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-09-08 01:09:16.299400 | orchestrator | Monday 08 September 2025 01:05:18 +0000 (0:00:02.453) 0:05:33.357 ****** 2025-09-08 01:09:16.299408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-08 01:09:16.299424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-08 01:09:16.299432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.299441 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:09:16.299454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-08 01:09:16.299462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-08 01:09:16.299475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.299483 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:09:16.299498 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-08 01:09:16.299506 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-08 01:09:16.299514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.299526 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:09:16.299535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-08 01:09:16.299546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 
01:09:16.299555 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:16.299563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-08 01:09:16.299576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.299584 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:16.299592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 
'timeout': '30'}}})  2025-09-08 01:09:16.299601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:09:16.299609 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:16.299616 | orchestrator | 2025-09-08 01:09:16.299624 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-09-08 01:09:16.299632 | orchestrator | Monday 08 September 2025 01:05:19 +0000 (0:00:01.427) 0:05:34.784 ****** 2025-09-08 01:09:16.299640 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-08 01:09:16.299648 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-08 01:09:16.299656 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-08 01:09:16.299667 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-08 01:09:16.299675 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:09:16.299683 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:09:16.299691 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-08 01:09:16.299699 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-08 01:09:16.299706 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-08 01:09:16.299714 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-08 01:09:16.299722 | orchestrator | skipping: [testbed-node-5] 
2025-09-08 01:09:16.299730 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-09-08 01:09:16.299737 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-09-08 01:09:16.299745 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.299753 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.299761 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-09-08 01:09:16.299768 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-09-08 01:09:16.299776 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.299784 | orchestrator |
2025-09-08 01:09:16.299797 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-09-08 01:09:16.299804 | orchestrator | Monday 08 September 2025 01:05:21 +0000 (0:00:01.645) 0:05:36.430 ******
2025-09-08 01:09:16.299816 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-08 01:09:16.299825 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-08 01:09:16.299833 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-08 01:09:16.299846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-08 01:09:16.299868 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-08 01:09:16.299886 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-08 01:09:16.299894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-08 01:09:16.299903 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-08 01:09:16.299911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.299919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-08 01:09:16.299931 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.299940 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.299959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.299968 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.299976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:09:16.299984 | orchestrator |
2025-09-08 01:09:16.299992 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-08 01:09:16.300000 | orchestrator | Monday 08 September 2025 01:05:25 +0000 (0:00:04.662) 0:05:41.092 ******
2025-09-08 01:09:16.300008 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:09:16.300016 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:09:16.300024 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:09:16.300032 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.300040 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.300047 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.300055 | orchestrator |
2025-09-08 01:09:16.300063 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-08 01:09:16.300071 | orchestrator | Monday 08 September 2025 01:05:26 +0000 (0:00:00.796) 0:05:41.888 ******
2025-09-08 01:09:16.300079 | orchestrator |
2025-09-08 01:09:16.300086 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-08 01:09:16.300094 | orchestrator | Monday 08 September 2025 01:05:26 +0000 (0:00:00.135) 0:05:42.024 ******
2025-09-08 01:09:16.300102 | orchestrator |
2025-09-08 01:09:16.300110 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-08 01:09:16.300118 | orchestrator | Monday 08 September 2025 01:05:26 +0000 (0:00:00.123) 0:05:42.148 ******
2025-09-08 01:09:16.300130 | orchestrator |
2025-09-08 01:09:16.300138 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-08 01:09:16.300146 | orchestrator | Monday 08 September 2025 01:05:27 +0000 (0:00:00.110) 0:05:42.259 ******
2025-09-08 01:09:16.300154 | orchestrator |
2025-09-08 01:09:16.300165 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-08 01:09:16.300173 | orchestrator | Monday 08 September 2025 01:05:27 +0000 (0:00:00.101) 0:05:42.360 ******
2025-09-08 01:09:16.300181 | orchestrator |
2025-09-08 01:09:16.300189 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-08 01:09:16.300196 | orchestrator | Monday 08 September 2025 01:05:27 +0000 (0:00:00.098) 0:05:42.458 ******
2025-09-08 01:09:16.300204 | orchestrator |
2025-09-08 01:09:16.300212 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-09-08 01:09:16.300220 | orchestrator | Monday 08 September 2025 01:05:27 +0000 (0:00:00.198) 0:05:42.657 ******
2025-09-08 01:09:16.300227 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:09:16.300235 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:09:16.300243 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:09:16.300251 | orchestrator |
2025-09-08 01:09:16.300258 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-09-08 01:09:16.300266 | orchestrator | Monday 08 September 2025 01:06:20 +0000 (0:00:52.642) 0:06:35.300 ******
2025-09-08 01:09:16.300274 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:09:16.300282 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:09:16.300290 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:09:16.300297 | orchestrator |
2025-09-08 01:09:16.300316 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-09-08 01:09:16.300324 | orchestrator | Monday 08 September 2025 01:06:31 +0000 (0:00:11.870) 0:06:47.171 ******
2025-09-08 01:09:16.300340 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:09:16.300348 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:09:16.300356 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:09:16.300364 | orchestrator |
2025-09-08 01:09:16.300376 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-09-08 01:09:16.300384 | orchestrator | Monday 08 September 2025 01:06:55 +0000 (0:00:23.826) 0:07:10.997 ******
2025-09-08 01:09:16.300391 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:09:16.300399 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:09:16.300407 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:09:16.300415 | orchestrator |
2025-09-08 01:09:16.300423 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-09-08 01:09:16.300431 | orchestrator | Monday 08 September 2025 01:07:27 +0000 (0:00:32.049) 0:07:43.046 ******
2025-09-08 01:09:16.300438 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left).
2025-09-08 01:09:16.300447 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left).
2025-09-08 01:09:16.300455 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left).
2025-09-08 01:09:16.300463 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:09:16.300470 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:09:16.300478 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:09:16.300486 | orchestrator |
2025-09-08 01:09:16.300494 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-09-08 01:09:16.300501 | orchestrator | Monday 08 September 2025 01:07:34 +0000 (0:00:06.232) 0:07:49.281 ******
2025-09-08 01:09:16.300509 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:09:16.300517 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:09:16.300525 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:09:16.300532 | orchestrator |
2025-09-08 01:09:16.300540 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-09-08 01:09:16.300548 | orchestrator | Monday 08 September 2025 01:07:34 +0000 (0:00:00.962) 0:07:50.243 ******
2025-09-08 01:09:16.300561 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:09:16.300569 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:09:16.300576 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:09:16.300584 | orchestrator |
2025-09-08 01:09:16.300592 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-09-08 01:09:16.300600 | orchestrator | Monday 08 September 2025 01:08:02 +0000 (0:00:27.892) 0:08:18.136 ******
2025-09-08 01:09:16.300608 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:09:16.300615 | orchestrator |
2025-09-08 01:09:16.300623 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-09-08 01:09:16.300631 | orchestrator | Monday 08 September 2025 01:08:03 +0000 (0:00:00.129) 0:08:18.265 ******
2025-09-08 01:09:16.300639 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:09:16.300646 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:09:16.300654 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.300662 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.300669 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.300677 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-09-08 01:09:16.300686 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-08 01:09:16.300694 | orchestrator |
2025-09-08 01:09:16.300702 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-09-08 01:09:16.300709 | orchestrator | Monday 08 September 2025 01:08:27 +0000 (0:00:24.050) 0:08:42.315 ******
2025-09-08 01:09:16.300717 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:09:16.300725 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:09:16.300733 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.300740 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.300748 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.300756 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:09:16.300763 | orchestrator |
2025-09-08 01:09:16.300771 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-09-08 01:09:16.300779 | orchestrator | Monday 08 September 2025 01:08:37 +0000 (0:00:10.290) 0:08:52.606 ******
2025-09-08 01:09:16.300787 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:09:16.300794 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.300802 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:09:16.300810 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.300817 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.300829 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2025-09-08 01:09:16.300837 | orchestrator |
2025-09-08 01:09:16.300844 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-08 01:09:16.300866 | orchestrator | Monday 08 September 2025 01:08:41 +0000 (0:00:03.783) 0:08:56.390 ******
2025-09-08 01:09:16.300875 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-08 01:09:16.300882 | orchestrator |
2025-09-08 01:09:16.300890 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-08 01:09:16.300898 | orchestrator | Monday 08 September 2025 01:08:53 +0000 (0:00:12.190) 0:09:08.581 ******
2025-09-08 01:09:16.300906 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-08 01:09:16.300913 | orchestrator |
2025-09-08 01:09:16.300921 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-09-08 01:09:16.300929 | orchestrator | Monday 08 September 2025 01:08:54 +0000 (0:00:01.321) 0:09:09.902 ******
2025-09-08 01:09:16.300937 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:09:16.300944 | orchestrator |
2025-09-08 01:09:16.300952 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-09-08 01:09:16.300960 | orchestrator | Monday 08 September 2025 01:08:55 +0000 (0:00:01.245) 0:09:11.148 ******
2025-09-08 01:09:16.300968 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-08 01:09:16.300982 | orchestrator |
2025-09-08 01:09:16.300990 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-09-08 01:09:16.300997 | orchestrator | Monday 08 September 2025 01:09:06 +0000 (0:00:10.284) 0:09:21.432 ******
2025-09-08 01:09:16.301005 | orchestrator | ok: [testbed-node-3]
2025-09-08 01:09:16.301013 | orchestrator | ok: [testbed-node-4]
2025-09-08 01:09:16.301024 | orchestrator | ok: [testbed-node-5]
2025-09-08 01:09:16.301033 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:09:16.301040 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:09:16.301048 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:09:16.301056 | orchestrator |
2025-09-08 01:09:16.301064 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-09-08 01:09:16.301071 | orchestrator |
2025-09-08 01:09:16.301079 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-09-08 01:09:16.301087 | orchestrator | Monday 08 September 2025 01:09:08 +0000 (0:00:01.833) 0:09:23.266 ******
2025-09-08 01:09:16.301095 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:09:16.301103 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:09:16.301110 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:09:16.301118 | orchestrator |
2025-09-08 01:09:16.301126 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-09-08 01:09:16.301134 | orchestrator |
2025-09-08 01:09:16.301141 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-09-08 01:09:16.301149 | orchestrator | Monday 08 September 2025 01:09:09 +0000 (0:00:01.157) 0:09:24.424 ******
2025-09-08 01:09:16.301157 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.301165 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.301172 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.301180 | orchestrator |
2025-09-08 01:09:16.301188 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-09-08 01:09:16.301196 | orchestrator |
2025-09-08 01:09:16.301203 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-09-08 01:09:16.301211 | orchestrator | Monday 08 September 2025 01:09:09 +0000 (0:00:00.504) 0:09:24.929 ******
2025-09-08 01:09:16.301219 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-09-08 01:09:16.301227 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-09-08 01:09:16.301235 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-09-08 01:09:16.301242 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-09-08 01:09:16.301250 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-09-08 01:09:16.301258 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-09-08 01:09:16.301266 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:09:16.301274 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-09-08 01:09:16.301281 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-09-08 01:09:16.301289 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-09-08 01:09:16.301297 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-09-08 01:09:16.301305 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-09-08 01:09:16.301312 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-09-08 01:09:16.301320 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:09:16.301328 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-09-08 01:09:16.301335 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-09-08 01:09:16.301343 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-09-08 01:09:16.301351 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-09-08 01:09:16.301358 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-09-08 01:09:16.301366 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-09-08 01:09:16.301374 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:09:16.301388 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-09-08 01:09:16.301397 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-09-08 01:09:16.301411 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-09-08 01:09:16.301424 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-09-08 01:09:16.301438 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-09-08 01:09:16.301450 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-09-08 01:09:16.301463 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.301476 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-09-08 01:09:16.301489 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-09-08 01:09:16.301507 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-09-08 01:09:16.301520 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-09-08 01:09:16.301534 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-09-08 01:09:16.301548 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-09-08 01:09:16.301562 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.301571 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-09-08 01:09:16.301579 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-09-08 01:09:16.301587 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-09-08 01:09:16.301595 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-09-08 01:09:16.301603 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-09-08 01:09:16.301610 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-09-08 01:09:16.301618 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.301626 | orchestrator |
2025-09-08 01:09:16.301633 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-09-08 01:09:16.301641 | orchestrator |
2025-09-08 01:09:16.301649 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-09-08 01:09:16.301657 | orchestrator | Monday 08 September 2025 01:09:11 +0000 (0:00:01.397) 0:09:26.326 ******
2025-09-08 01:09:16.301664 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-09-08 01:09:16.301672 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-09-08 01:09:16.301680 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.301692 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-09-08 01:09:16.301700 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-09-08 01:09:16.301708 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.301716 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-09-08 01:09:16.301723 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-09-08 01:09:16.301731 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.301739 | orchestrator |
2025-09-08 01:09:16.301747 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-09-08 01:09:16.301754 | orchestrator |
2025-09-08 01:09:16.301762 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-09-08 01:09:16.301770 | orchestrator | Monday 08 September 2025 01:09:11 +0000 (0:00:00.776) 0:09:27.103 ******
2025-09-08 01:09:16.301777 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.301785 | orchestrator |
2025-09-08 01:09:16.301793 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-09-08 01:09:16.301800 | orchestrator |
2025-09-08 01:09:16.301808 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-09-08 01:09:16.301816 | orchestrator | Monday 08 September 2025 01:09:12 +0000 (0:00:00.654) 0:09:27.758 ******
2025-09-08 01:09:16.301824 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:16.301831 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:16.301839 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:16.301897 | orchestrator | 2025-09-08 01:09:16.301908 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:09:16.301916 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 01:09:16.301924 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-09-08 01:09:16.301933 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-08 01:09:16.301941 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-08 01:09:16.301949 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-08 01:09:16.301957 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-08 01:09:16.301965 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-08 01:09:16.301973 | orchestrator | 2025-09-08 01:09:16.301981 | orchestrator | 2025-09-08 01:09:16.301989 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:09:16.301997 | orchestrator | Monday 08 September 2025 01:09:12 +0000 (0:00:00.416) 0:09:28.174 ****** 2025-09-08 01:09:16.302005 | orchestrator | =============================================================================== 2025-09-08 01:09:16.302013 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 52.64s 2025-09-08 01:09:16.302059 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 32.05s 2025-09-08 01:09:16.302066 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 27.91s 2025-09-08 01:09:16.302073 | orchestrator | nova-cell : 
Restart nova-compute container ----------------------------- 27.89s 2025-09-08 01:09:16.302079 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 24.06s 2025-09-08 01:09:16.302086 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 24.05s 2025-09-08 01:09:16.302093 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 23.83s 2025-09-08 01:09:16.302104 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.67s 2025-09-08 01:09:16.302111 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.51s 2025-09-08 01:09:16.302117 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 12.59s 2025-09-08 01:09:16.302124 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.19s 2025-09-08 01:09:16.302130 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.87s 2025-09-08 01:09:16.302137 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.73s 2025-09-08 01:09:16.302144 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.51s 2025-09-08 01:09:16.302150 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.03s 2025-09-08 01:09:16.302157 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.29s 2025-09-08 01:09:16.302163 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.28s 2025-09-08 01:09:16.302170 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 8.32s 2025-09-08 01:09:16.302176 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.75s 2025-09-08 01:09:16.302183 | orchestrator | service-ks-register : nova 
| Granting user roles ------------------------ 7.51s 2025-09-08 01:09:16.302194 | orchestrator | 2025-09-08 01:09:16 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:09:16.302207 | orchestrator | 2025-09-08 01:09:16 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:09:19.334741 | orchestrator | 2025-09-08 01:09:19 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED 2025-09-08 01:09:19.335303 | orchestrator | 2025-09-08 01:09:19 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:09:19.337992 | orchestrator | 2025-09-08 01:09:19 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:09:19.338052 | orchestrator | 2025-09-08 01:09:19 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:09:22.379323 | orchestrator | 2025-09-08 01:09:22 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED 2025-09-08 01:09:22.379937 | orchestrator | 2025-09-08 01:09:22 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:09:22.381718 | orchestrator | 2025-09-08 01:09:22 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:09:22.381740 | orchestrator | 2025-09-08 01:09:22 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:09:25.426558 | orchestrator | 2025-09-08 01:09:25 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED 2025-09-08 01:09:25.427818 | orchestrator | 2025-09-08 01:09:25 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:09:25.430656 | orchestrator | 2025-09-08 01:09:25 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED 2025-09-08 01:09:25.430687 | orchestrator | 2025-09-08 01:09:25 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:09:28.481337 | orchestrator | 2025-09-08 01:09:28 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED 
2025-09-08 01:09:28.483980 | orchestrator | 2025-09-08 01:09:28 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:09:28.486278 | orchestrator | 2025-09-08 01:09:28 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:09:28.486318 | orchestrator | 2025-09-08 01:09:28 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:09:31.520174 | orchestrator | 2025-09-08 01:09:31 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED
2025-09-08 01:09:31.523171 | orchestrator | 2025-09-08 01:09:31 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:09:31.525452 | orchestrator | 2025-09-08 01:09:31 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:09:31.525811 | orchestrator | 2025-09-08 01:09:31 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:09:34.572703 | orchestrator | 2025-09-08 01:09:34 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED
2025-09-08 01:09:34.573636 | orchestrator | 2025-09-08 01:09:34 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:09:34.574565 | orchestrator | 2025-09-08 01:09:34 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:09:34.574607 | orchestrator | 2025-09-08 01:09:34 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:09:37.620609 | orchestrator | 2025-09-08 01:09:37 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED
2025-09-08 01:09:37.622445 | orchestrator | 2025-09-08 01:09:37 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:09:37.625781 | orchestrator | 2025-09-08 01:09:37 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:09:37.625953 | orchestrator | 2025-09-08 01:09:37 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:09:40.672969 | orchestrator | 2025-09-08 01:09:40 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED
2025-09-08 01:09:40.674505 | orchestrator | 2025-09-08 01:09:40 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:09:40.676528 | orchestrator | 2025-09-08 01:09:40 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:09:40.677177 | orchestrator | 2025-09-08 01:09:40 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:09:43.716645 | orchestrator | 2025-09-08 01:09:43 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED
2025-09-08 01:09:43.719458 | orchestrator | 2025-09-08 01:09:43 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:09:43.722548 | orchestrator | 2025-09-08 01:09:43 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:09:43.722837 | orchestrator | 2025-09-08 01:09:43 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:09:46.770935 | orchestrator | 2025-09-08 01:09:46 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED
2025-09-08 01:09:46.772198 | orchestrator | 2025-09-08 01:09:46 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:09:46.773743 | orchestrator | 2025-09-08 01:09:46 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:09:46.773788 | orchestrator | 2025-09-08 01:09:46 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:09:49.818096 | orchestrator | 2025-09-08 01:09:49 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED
2025-09-08 01:09:49.820006 | orchestrator | 2025-09-08 01:09:49 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:09:49.821524 | orchestrator | 2025-09-08 01:09:49 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:09:49.821547 | orchestrator | 2025-09-08 01:09:49 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:09:52.865053 | orchestrator | 2025-09-08 01:09:52 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED
2025-09-08 01:09:52.866388 | orchestrator | 2025-09-08 01:09:52 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:09:52.868362 | orchestrator | 2025-09-08 01:09:52 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:09:52.868639 | orchestrator | 2025-09-08 01:09:52 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:09:55.910417 | orchestrator | 2025-09-08 01:09:55 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED
2025-09-08 01:09:55.912677 | orchestrator | 2025-09-08 01:09:55 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:09:55.914692 | orchestrator | 2025-09-08 01:09:55 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state STARTED
2025-09-08 01:09:55.914720 | orchestrator | 2025-09-08 01:09:55 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:09:58.962563 | orchestrator | 2025-09-08 01:09:58 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED
2025-09-08 01:09:58.965223 | orchestrator | 2025-09-08 01:09:58 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED
2025-09-08 01:09:58.970108 | orchestrator | 2025-09-08 01:09:58 | INFO  | Task 04fca702-3f14-46db-add6-1f4ecc94879f is in state SUCCESS
2025-09-08 01:09:58.973018 | orchestrator |
2025-09-08 01:09:58.973050 | orchestrator |
2025-09-08 01:09:58.973063 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 01:09:58.973176 | orchestrator |
2025-09-08 01:09:58.973263 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 01:09:58.973276 | orchestrator | Monday 08 September 2025 01:07:36 +0000 (0:00:00.414) 0:00:00.414 ******
2025-09-08
01:09:58.973288 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:09:58.973799 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:09:58.973823 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:09:58.973901 | orchestrator | 2025-09-08 01:09:58.973937 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 01:09:58.973949 | orchestrator | Monday 08 September 2025 01:07:37 +0000 (0:00:00.384) 0:00:00.799 ****** 2025-09-08 01:09:58.973960 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-09-08 01:09:58.973972 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-09-08 01:09:58.974333 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-09-08 01:09:58.974346 | orchestrator | 2025-09-08 01:09:58.974357 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-09-08 01:09:58.974368 | orchestrator | 2025-09-08 01:09:58.974379 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-08 01:09:58.974390 | orchestrator | Monday 08 September 2025 01:07:37 +0000 (0:00:00.590) 0:00:01.390 ****** 2025-09-08 01:09:58.974401 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:09:58.974413 | orchestrator | 2025-09-08 01:09:58.974423 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-09-08 01:09:58.974434 | orchestrator | Monday 08 September 2025 01:07:38 +0000 (0:00:00.793) 0:00:02.184 ****** 2025-09-08 01:09:58.974470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:58.974486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:58.974498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:58.974510 | orchestrator | 2025-09-08 01:09:58.974521 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-09-08 01:09:58.974532 | orchestrator | Monday 08 
September 2025 01:07:39 +0000 (0:00:00.951) 0:00:03.135 ****** 2025-09-08 01:09:58.974557 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-09-08 01:09:58.974569 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-09-08 01:09:58.974581 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-08 01:09:58.974592 | orchestrator | 2025-09-08 01:09:58.974602 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-08 01:09:58.974613 | orchestrator | Monday 08 September 2025 01:07:40 +0000 (0:00:00.932) 0:00:04.068 ****** 2025-09-08 01:09:58.974624 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:09:58.974635 | orchestrator | 2025-09-08 01:09:58.974646 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-09-08 01:09:58.974657 | orchestrator | Monday 08 September 2025 01:07:41 +0000 (0:00:00.975) 0:00:05.043 ****** 2025-09-08 01:09:58.974710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:58.974724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:58.974743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:58.974754 | orchestrator | 2025-09-08 01:09:58.974766 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-09-08 01:09:58.974777 | orchestrator | Monday 08 September 2025 01:07:42 +0000 (0:00:01.405) 0:00:06.449 ****** 2025-09-08 01:09:58.974788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-08 01:09:58.974800 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:58.974818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-08 01:09:58.974830 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:58.974870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-08 01:09:58.974883 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:58.974894 | orchestrator | 2025-09-08 01:09:58.974905 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal 
TLS key] ***** 2025-09-08 01:09:58.974946 | orchestrator | Monday 08 September 2025 01:07:43 +0000 (0:00:00.450) 0:00:06.899 ****** 2025-09-08 01:09:58.974957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-08 01:09:58.974968 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:58.974980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-08 01:09:58.974991 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:58.975008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-08 01:09:58.975020 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:58.975031 | orchestrator | 2025-09-08 01:09:58.975050 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-08 01:09:58.975062 | orchestrator | Monday 08 September 2025 01:07:44 +0000 (0:00:00.798) 0:00:07.698 ****** 2025-09-08 01:09:58.975073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:58.975085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:58.975128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:58.975142 | orchestrator | 2025-09-08 01:09:58.975153 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-08 01:09:58.975164 | orchestrator | Monday 08 September 2025 01:07:45 +0000 (0:00:01.219) 0:00:08.917 ****** 2025-09-08 01:09:58.975175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:58.975191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:58.975203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:58.975222 | orchestrator | 2025-09-08 01:09:58.975233 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-09-08 01:09:58.975244 | orchestrator | Monday 08 September 2025 01:07:46 +0000 (0:00:01.317) 0:00:10.234 ****** 2025-09-08 01:09:58.975255 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:58.975265 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:58.975276 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:58.975287 | orchestrator | 2025-09-08 01:09:58.975298 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-09-08 01:09:58.975309 | orchestrator | Monday 08 September 
2025 01:07:47 +0000 (0:00:00.508) 0:00:10.743 ******
2025-09-08 01:09:58.975319 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-08 01:09:58.975330 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-08 01:09:58.975341 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-08 01:09:58.975352 | orchestrator |
2025-09-08 01:09:58.975362 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-09-08 01:09:58.975373 | orchestrator | Monday 08 September 2025 01:07:48 +0000 (0:00:01.276) 0:00:12.019 ******
2025-09-08 01:09:58.975384 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-08 01:09:58.975395 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-08 01:09:58.975406 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-08 01:09:58.975417 | orchestrator |
2025-09-08 01:09:58.975427 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-09-08 01:09:58.975438 | orchestrator | Monday 08 September 2025 01:07:49 +0000 (0:00:01.267) 0:00:13.287 ******
2025-09-08 01:09:58.975477 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-08 01:09:58.975490 | orchestrator |
2025-09-08 01:09:58.975501 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-09-08 01:09:58.975512 | orchestrator | Monday 08 September 2025 01:07:50 +0000 (0:00:00.751) 0:00:14.039 ******
2025-09-08 01:09:58.975522 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-09-08
01:09:58.975533 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-09-08 01:09:58.975544 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:09:58.975555 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:09:58.975566 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:09:58.975576 | orchestrator | 2025-09-08 01:09:58.975587 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-09-08 01:09:58.975598 | orchestrator | Monday 08 September 2025 01:07:51 +0000 (0:00:00.755) 0:00:14.794 ****** 2025-09-08 01:09:58.975608 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:58.975619 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:58.975630 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:58.975640 | orchestrator | 2025-09-08 01:09:58.975651 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-09-08 01:09:58.975662 | orchestrator | Monday 08 September 2025 01:07:51 +0000 (0:00:00.494) 0:00:15.288 ****** 2025-09-08 01:09:58.975673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1912357, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2574477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.975699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1912357, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2574477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.975711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1912357, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2574477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.975722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1912373, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2682438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.975762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1912373, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2682438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:58.975775 | orchestrator | Loop results, condensed: every item below reported changed on testbed-node-0, testbed-node-1, and testbed-node-2. All targets are regular files under /operations/grafana/dashboards/, mode 0644, owner root:root (uid 0, gid 0), dev 118, nlink 1, atime/mtime 1757290012.0.

    key                                size     inode
    ceph/rbd-overview.json             25686    1912373
    ceph/ceph_pools.json               25279    1912360
    ceph/rgw-s3-analytics.json         167897   1912374
    ceph/osd-device-details.json       26655    1912365
    ceph/radosgw-overview.json         39556    1912370
    ceph/README.md                     84       1912356
    ceph/ceph-cluster.json             34113    1912358
    ceph/cephfs-overview.json          9025     1912361
    ceph/pool-detail.json              19609    1912367
    ceph/rbd-details.json              12997    1912372
    ceph/ceph_overview.json            80386    1912359
    ceph/radosgw-detail.json           19695    1912369
    ceph/osds-overview.json            38432    1912366
    ceph/multi-cluster-overview.json   62676    1912364
    ceph/hosts-overview.json           27218    1912363
    ceph/pool-overview.json            49139    1912368
    ceph/host-details.json             44791    1912362
    ceph/radosgw-sync-overview.json    16156    1912371
    openstack/openstack.json           57270    1912397
    infrastructure/haproxy.json        410814   1912382
    infrastructure/database.json       30898    1912379

2025-09-08 01:09:58.976766 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1912386, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2826352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.976776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1912386, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2826352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.976799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1912386, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2826352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-09-08 01:09:58.976809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1912376, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2704477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.976819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1912376, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2704477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.976834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1912376, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2704477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.976845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1912390, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2914479, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.976860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1912390, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2914479, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.976875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1912390, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2914479, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.976886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1912387, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2884479, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.976896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1912387, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2884479, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.976928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 
'inode': 1912387, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2884479, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.976940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1912391, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2914479, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.976956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1912391, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2914479, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.976971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1912391, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2914479, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.976982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1912395, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.297448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.976992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1912395, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.297448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1912395, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.297448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1912389, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.290448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1912389, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.290448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977043 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1912389, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.290448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1912384, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2816708, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1912384, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2816708, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977083 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1912384, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2816708, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1912381, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2764478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1912381, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2764478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2025-09-08 01:09:58.977120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1912381, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2764478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1912383, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.280448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1912383, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.280448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1912383, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.280448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1912380, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2754478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1912380, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2754478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1912380, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2754478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1912385, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2822847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1912385, 'dev': 118, 'nlink': 1, 'atime': 
1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2822847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1912385, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2822847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1912394, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.296448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 222049, 'inode': 1912394, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.296448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1912394, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.296448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1912393, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2935817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1912393, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2935817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1912393, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2935817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1912377, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.271448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1912377, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.271448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1912377, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.271448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1912378, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2724478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977381 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1912378, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2724478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1912378, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.2724478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1912388, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.289448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-09-08 01:09:58.977422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1912388, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.289448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1912388, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.289448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1912392, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.292448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1912392, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.292448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1912392, 'dev': 118, 'nlink': 1, 'atime': 1757290012.0, 'mtime': 1757290012.0, 'ctime': 1757291793.292448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:58.977477 | orchestrator | 2025-09-08 01:09:58.977487 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-09-08 01:09:58.977497 | orchestrator | Monday 08 September 2025 01:08:30 +0000 (0:00:39.203) 0:00:54.492 ****** 2025-09-08 01:09:58.977513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:58.977530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:58.977540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:58.977550 | orchestrator | 2025-09-08 01:09:58.977560 | 
orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-09-08 01:09:58.977569 | orchestrator | Monday 08 September 2025 01:08:32 +0000 (0:00:01.438) 0:00:55.930 ****** 2025-09-08 01:09:58.977579 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:09:58.977589 | orchestrator | 2025-09-08 01:09:58.977598 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-08 01:09:58.977608 | orchestrator | Monday 08 September 2025 01:08:34 +0000 (0:00:02.410) 0:00:58.341 ****** 2025-09-08 01:09:58.977618 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:09:58.977627 | orchestrator | 2025-09-08 01:09:58.977637 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-08 01:09:58.977647 | orchestrator | Monday 08 September 2025 01:08:36 +0000 (0:00:02.236) 0:01:00.577 ****** 2025-09-08 01:09:58.977656 | orchestrator | 2025-09-08 01:09:58.977666 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-08 01:09:58.977680 | orchestrator | Monday 08 September 2025 01:08:37 +0000 (0:00:00.059) 0:01:00.637 ****** 2025-09-08 01:09:58.977690 | orchestrator | 2025-09-08 01:09:58.977699 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-08 01:09:58.977709 | orchestrator | Monday 08 September 2025 01:08:37 +0000 (0:00:00.060) 0:01:00.697 ****** 2025-09-08 01:09:58.977719 | orchestrator | 2025-09-08 01:09:58.977728 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-09-08 01:09:58.977738 | orchestrator | Monday 08 September 2025 01:08:37 +0000 (0:00:00.164) 0:01:00.862 ****** 2025-09-08 01:09:58.977747 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:58.977757 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:58.977767 | orchestrator | changed: [testbed-node-0] 
2025-09-08 01:09:58.977776 | orchestrator | 2025-09-08 01:09:58.977786 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-09-08 01:09:58.977795 | orchestrator | Monday 08 September 2025 01:08:43 +0000 (0:00:06.756) 0:01:07.619 ****** 2025-09-08 01:09:58.977810 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:58.977820 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:58.977830 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-09-08 01:09:58.977840 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-09-08 01:09:58.977850 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-09-08 01:09:58.977859 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:09:58.977869 | orchestrator | 2025-09-08 01:09:58.977879 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-09-08 01:09:58.977888 | orchestrator | Monday 08 September 2025 01:09:22 +0000 (0:00:38.140) 0:01:45.760 ****** 2025-09-08 01:09:58.977898 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:58.977924 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:09:58.977934 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:09:58.977943 | orchestrator | 2025-09-08 01:09:58.977953 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-09-08 01:09:58.977963 | orchestrator | Monday 08 September 2025 01:09:52 +0000 (0:00:30.830) 0:02:16.591 ****** 2025-09-08 01:09:58.977972 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:09:58.977982 | orchestrator | 2025-09-08 01:09:58.977991 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-09-08 01:09:58.978001 | orchestrator | Monday 08 
September 2025 01:09:55 +0000 (0:00:02.249) 0:02:18.840 ****** 2025-09-08 01:09:58.978010 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:58.978054 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:58.978064 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:58.978074 | orchestrator | 2025-09-08 01:09:58.978083 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-09-08 01:09:58.978093 | orchestrator | Monday 08 September 2025 01:09:55 +0000 (0:00:00.503) 0:02:19.344 ****** 2025-09-08 01:09:58.978104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-09-08 01:09:58.978116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-09-08 01:09:58.978126 | orchestrator | 2025-09-08 01:09:58.978136 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-09-08 01:09:58.978146 | orchestrator | Monday 08 September 2025 01:09:58 +0000 (0:00:02.350) 0:02:21.695 ****** 2025-09-08 01:09:58.978155 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:58.978165 | orchestrator | 2025-09-08 01:09:58.978174 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:09:58.978185 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-08 01:09:58.978196 | orchestrator | 
testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-08 01:09:58.978206 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-08 01:09:58.978215 | orchestrator | 2025-09-08 01:09:58.978225 | orchestrator | 2025-09-08 01:09:58.978235 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:09:58.978244 | orchestrator | Monday 08 September 2025 01:09:58 +0000 (0:00:00.329) 0:02:22.024 ****** 2025-09-08 01:09:58.978262 | orchestrator | =============================================================================== 2025-09-08 01:09:58.978272 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 39.20s 2025-09-08 01:09:58.978281 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.14s 2025-09-08 01:09:58.978291 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 30.83s 2025-09-08 01:09:58.978300 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.76s 2025-09-08 01:09:58.978310 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.41s 2025-09-08 01:09:58.978325 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.35s 2025-09-08 01:09:58.978335 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.25s 2025-09-08 01:09:58.978345 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.24s 2025-09-08 01:09:58.978355 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.44s 2025-09-08 01:09:58.978364 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.41s 2025-09-08 01:09:58.978374 | orchestrator | grafana : Copying over grafana.ini 
-------------------------------------- 1.32s 2025-09-08 01:09:58.978383 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.28s 2025-09-08 01:09:58.978393 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.27s 2025-09-08 01:09:58.978402 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.22s 2025-09-08 01:09:58.978412 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.98s 2025-09-08 01:09:58.978421 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.95s 2025-09-08 01:09:58.978431 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.93s 2025-09-08 01:09:58.978440 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.80s 2025-09-08 01:09:58.978450 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.79s 2025-09-08 01:09:58.978459 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.76s 2025-09-08 01:09:58.978469 | orchestrator | 2025-09-08 01:09:58 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:10:02.019089 | orchestrator | 2025-09-08 01:10:02 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED 2025-09-08 01:10:02.020192 | orchestrator | 2025-09-08 01:10:02 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:10:02.020223 | orchestrator | 2025-09-08 01:10:02 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:10:05.069364 | orchestrator | 2025-09-08 01:10:05 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED 2025-09-08 01:10:05.071653 | orchestrator | 2025-09-08 01:10:05 | INFO  | Task cfc744c1-9dcb-4836-bf71-4290e54f3724 is in state STARTED 2025-09-08 01:10:05.071683 | orchestrator | 2025-09-08 01:10:05 | INFO 
 | Wait 1 second(s) until the next check [repeating poll output condensed: both tasks continued to report state STARTED on each check, roughly every 3 seconds; task cfc744c1-9dcb-4836-bf71-4290e54f3724 reached state SUCCESS at 2025-09-08 01:10:50, after which only task d10f3906-8a83-458d-84df-c561e80e79b2 was polled and remained in state STARTED through 01:13:19] 2025-09-08 01:13:19.975092 | orchestrator | 2025-09-08 01:13:19 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED 2025-09-08 01:13:19.975251 | orchestrator | 2025-09-08 01:13:19 | INFO  | Wait 1 second(s) until
the next check 2025-09-08 01:13:23.019591 | orchestrator | 2025-09-08 01:13:23 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED 2025-09-08 01:13:23.019677 | orchestrator | 2025-09-08 01:13:23 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:13:26.067477 | orchestrator | 2025-09-08 01:13:26 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED 2025-09-08 01:13:26.067588 | orchestrator | 2025-09-08 01:13:26 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:13:29.114631 | orchestrator | 2025-09-08 01:13:29 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state STARTED 2025-09-08 01:13:29.114753 | orchestrator | 2025-09-08 01:13:29 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:13:32.166579 | orchestrator | 2025-09-08 01:13:32 | INFO  | Task d10f3906-8a83-458d-84df-c561e80e79b2 is in state SUCCESS 2025-09-08 01:13:32.167939 | orchestrator | 2025-09-08 01:13:32.167977 | orchestrator | 2025-09-08 01:13:32.167988 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-09-08 01:13:32.167998 | orchestrator | 2025-09-08 01:13:32.168007 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-09-08 01:13:32.168017 | orchestrator | Monday 08 September 2025 01:04:37 +0000 (0:00:00.147) 0:00:00.147 ****** 2025-09-08 01:13:32.168026 | orchestrator | changed: [localhost] 2025-09-08 01:13:32.168037 | orchestrator | 2025-09-08 01:13:32.168046 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-09-08 01:13:32.168079 | orchestrator | Monday 08 September 2025 01:04:38 +0000 (0:00:01.272) 0:00:01.420 ****** 2025-09-08 01:13:32.168088 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 
2025-09-08 01:13:32.168097 | orchestrator |
2025-09-08 01:13:32.168159 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-08 01:13:32.169054 | orchestrator | changed: [localhost]
2025-09-08 01:13:32.169065 | orchestrator |
2025-09-08 01:13:32.169076 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-09-08 01:13:32.169087 | orchestrator | Monday 08 September 2025 01:10:38 +0000 (0:06:00.560) 0:06:01.981 ******
2025-09-08 01:13:32.169448 | orchestrator | changed: [localhost]
2025-09-08 01:13:32.169460 | orchestrator |
2025-09-08 01:13:32.169471 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 01:13:32.169482 | orchestrator |
2025-09-08 01:13:32.169492 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 01:13:32.169503 | orchestrator | Monday 08 September 2025 01:10:48 +0000 (0:00:09.314) 0:06:11.295 ******
2025-09-08 01:13:32.169514 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:13:32.169525 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:13:32.169535 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:13:32.169546 | orchestrator |
2025-09-08 01:13:32.169556 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 01:13:32.169567 | orchestrator | Monday 08 September 2025 01:10:48 +0000 (0:00:00.306) 0:06:11.602 ******
2025-09-08 01:13:32.169578 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-09-08 01:13:32.169589 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-09-08 01:13:32.169600 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-09-08 01:13:32.169611 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-09-08 01:13:32.169622 | orchestrator |
2025-09-08 01:13:32.169632 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-09-08 01:13:32.169643 | orchestrator | skipping: no hosts matched
2025-09-08 01:13:32.169655 | orchestrator |
2025-09-08 01:13:32.169665 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 01:13:32.169677 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:13:32.169690 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:13:32.169702 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:13:32.169713 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:13:32.169723 | orchestrator |
2025-09-08 01:13:32.169739 | orchestrator |
2025-09-08 01:13:32.169777 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 01:13:32.169796 | orchestrator | Monday 08 September 2025 01:10:48 +0000 (0:00:00.490) 0:06:12.092 ******
2025-09-08 01:13:32.169814 | orchestrator | ===============================================================================
2025-09-08 01:13:32.169832 | orchestrator | Download ironic-agent initramfs --------------------------------------- 360.56s
2025-09-08 01:13:32.169851 | orchestrator | Download ironic-agent kernel -------------------------------------------- 9.31s
2025-09-08 01:13:32.169869 | orchestrator | Ensure the destination directory exists --------------------------------- 1.27s
2025-09-08 01:13:32.169890 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s
2025-09-08 01:13:32.169908 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2025-09-08 01:13:32.169924 | orchestrator |
2025-09-08 01:13:32.169935 | orchestrator |
2025-09-08 01:13:32.169946 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 01:13:32.169956 | orchestrator |
2025-09-08 01:13:32.169967 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 01:13:32.169978 | orchestrator | Monday 08 September 2025 01:08:50 +0000 (0:00:00.263) 0:00:00.263 ******
2025-09-08 01:13:32.169989 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:13:32.169999 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:13:32.170078 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:13:32.170095 | orchestrator |
2025-09-08 01:13:32.170130 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 01:13:32.170215 | orchestrator | Monday 08 September 2025 01:08:50 +0000 (0:00:00.293) 0:00:00.557 ******
2025-09-08 01:13:32.170231 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-09-08 01:13:32.170244 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-09-08 01:13:32.170256 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-09-08 01:13:32.170268 | orchestrator |
2025-09-08 01:13:32.170280 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-09-08 01:13:32.170293 | orchestrator |
2025-09-08 01:13:32.170305 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-08 01:13:32.170318 | orchestrator | Monday 08 September 2025 01:08:50 +0000 (0:00:00.437) 0:00:00.994 ******
2025-09-08 01:13:32.170330 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:13:32.170342 | orchestrator |
2025-09-08 01:13:32.170355 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-09-08 01:13:32.170368 | orchestrator | Monday 08 September 2025 01:08:51 +0000 (0:00:00.584) 0:00:01.579 ******
2025-09-08 01:13:32.170381 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-09-08 01:13:32.170393 | orchestrator |
2025-09-08 01:13:32.170405 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-09-08 01:13:32.170417 | orchestrator | Monday 08 September 2025 01:08:54 +0000 (0:00:03.572) 0:00:05.151 ******
2025-09-08 01:13:32.170427 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-09-08 01:13:32.170438 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-09-08 01:13:32.170449 | orchestrator |
2025-09-08 01:13:32.170459 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-09-08 01:13:32.170470 | orchestrator | Monday 08 September 2025 01:09:01 +0000 (0:00:06.522) 0:00:11.674 ******
2025-09-08 01:13:32.170481 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-08 01:13:32.170492 | orchestrator |
2025-09-08 01:13:32.170503 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-09-08 01:13:32.170513 | orchestrator | Monday 08 September 2025 01:09:04 +0000 (0:00:03.203) 0:00:14.878 ******
2025-09-08 01:13:32.170524 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-08 01:13:32.170535 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-09-08 01:13:32.170557 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-09-08 01:13:32.170568 | orchestrator |
2025-09-08 01:13:32.170578 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-09-08 01:13:32.170589 | orchestrator | Monday 08 September 2025 01:09:12 +0000 (0:00:08.033) 0:00:22.911 ******
2025-09-08 01:13:32.170599 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-08 01:13:32.170610 | orchestrator |
2025-09-08 01:13:32.170621 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-09-08 01:13:32.170631 | orchestrator | Monday 08 September 2025 01:09:15 +0000 (0:00:03.092) 0:00:26.004 ******
2025-09-08 01:13:32.170642 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-09-08 01:13:32.170652 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-09-08 01:13:32.170663 | orchestrator |
2025-09-08 01:13:32.170674 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-09-08 01:13:32.170684 | orchestrator | Monday 08 September 2025 01:09:22 +0000 (0:00:07.163) 0:00:33.168 ******
2025-09-08 01:13:32.170695 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-09-08 01:13:32.170705 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-09-08 01:13:32.170716 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-09-08 01:13:32.170726 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-09-08 01:13:32.170737 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-09-08 01:13:32.170747 | orchestrator |
2025-09-08 01:13:32.170758 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-08 01:13:32.170769 | orchestrator | Monday 08 September 2025 01:09:38 +0000 (0:00:15.230) 0:00:48.399 ******
2025-09-08 01:13:32.170779 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:13:32.170790 | orchestrator |
2025-09-08 01:13:32.170801 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-09-08 01:13:32.170811 | orchestrator | Monday 08 September 2025 01:09:38 +0000 (0:00:00.560) 0:00:48.959 ******
2025-09-08 01:13:32.170822 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:32.170833 | orchestrator |
2025-09-08 01:13:32.170844 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2025-09-08 01:13:32.170855 | orchestrator | Monday 08 September 2025 01:09:43 +0000 (0:00:04.530) 0:00:53.490 ******
2025-09-08 01:13:32.170865 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:32.170876 | orchestrator |
2025-09-08 01:13:32.170892 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-09-08 01:13:32.170911 | orchestrator | Monday 08 September 2025 01:09:47 +0000 (0:00:04.096) 0:00:57.587 ******
2025-09-08 01:13:32.170929 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:13:32.170949 | orchestrator |
2025-09-08 01:13:32.170968 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2025-09-08 01:13:32.170987 | orchestrator | Monday 08 September 2025 01:09:50 +0000 (0:00:03.063) 0:01:00.651 ******
2025-09-08 01:13:32.171008 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-09-08 01:13:32.171028 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-09-08 01:13:32.171040 | orchestrator |
2025-09-08 01:13:32.171050 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2025-09-08 01:13:32.171098 | orchestrator | Monday 08 September 2025 01:10:00 +0000 (0:00:09.780) 0:01:10.432 ******
2025-09-08 01:13:32.171137 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2025-09-08 01:13:32.171149 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2025-09-08 01:13:32.171161 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2025-09-08 01:13:32.171181 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2025-09-08 01:13:32.171192 | orchestrator |
2025-09-08 01:13:32.171203 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2025-09-08 01:13:32.171214 | orchestrator | Monday 08 September 2025 01:10:15 +0000 (0:00:15.319) 0:01:25.751 ******
2025-09-08 01:13:32.171224 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:32.171235 | orchestrator |
2025-09-08 01:13:32.171246 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2025-09-08 01:13:32.171257 | orchestrator | Monday 08 September 2025 01:10:20 +0000 (0:00:04.567) 0:01:30.319 ******
2025-09-08 01:13:32.171267 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:32.171278 | orchestrator |
2025-09-08 01:13:32.171288 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2025-09-08 01:13:32.171299 | orchestrator | Monday 08 September 2025 01:10:25 +0000 (0:00:05.582) 0:01:35.901 ******
2025-09-08 01:13:32.171310 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:13:32.171320 | orchestrator |
2025-09-08 01:13:32.171330 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2025-09-08 01:13:32.171341 | orchestrator | Monday 08 September 2025 01:10:25 +0000 (0:00:00.207) 0:01:36.108 ******
2025-09-08 01:13:32.171352 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:32.171362 | orchestrator |
2025-09-08 01:13:32.171373 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-08 01:13:32.171383 | orchestrator | Monday 08 September 2025 01:10:31 +0000 (0:00:05.527) 0:01:41.636 ******
2025-09-08 01:13:32.171394 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:13:32.171405 | orchestrator |
2025-09-08 01:13:32.171415 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2025-09-08 01:13:32.171426 | orchestrator | Monday 08 September 2025 01:10:32 +0000 (0:00:00.966) 0:01:42.603 ******
2025-09-08 01:13:32.171436 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:13:32.171447 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:13:32.171457 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:32.171468 | orchestrator |
2025-09-08 01:13:32.171478 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2025-09-08 01:13:32.171489 | orchestrator | Monday 08 September 2025 01:10:38 +0000 (0:00:05.779) 0:01:48.382 ******
2025-09-08 01:13:32.171500 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:32.171510 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:13:32.171521 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:13:32.171531 | orchestrator |
2025-09-08 01:13:32.171542 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2025-09-08 01:13:32.171553 | orchestrator | Monday 08 September 2025 01:10:42 +0000 (0:00:04.201) 0:01:52.583 ******
2025-09-08 01:13:32.171563 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:32.171574 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:13:32.171585 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:13:32.171595 | orchestrator |
2025-09-08 01:13:32.171606 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2025-09-08 01:13:32.171616 | orchestrator | Monday 08 September 2025 01:10:43 +0000 (0:00:00.841) 0:01:53.424 ******
2025-09-08 01:13:32.171627 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:13:32.171637 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:13:32.171648 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:13:32.171658 | orchestrator |
2025-09-08 01:13:32.171669 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2025-09-08 01:13:32.171680 | orchestrator | Monday 08 September 2025 01:10:45 +0000 (0:00:01.968) 0:01:55.393 ******
2025-09-08 01:13:32.171690 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:13:32.171701 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:32.171719 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:13:32.171730 | orchestrator |
2025-09-08 01:13:32.171740 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2025-09-08 01:13:32.171751 | orchestrator | Monday 08 September 2025 01:10:46 +0000 (0:00:01.297) 0:01:56.690 ******
2025-09-08 01:13:32.171761 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:32.171772 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:13:32.171783 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:13:32.171793 | orchestrator |
2025-09-08 01:13:32.171803 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2025-09-08 01:13:32.171814 | orchestrator | Monday 08 September 2025 01:10:47 +0000 (0:00:01.228) 0:01:57.919 ******
2025-09-08 01:13:32.171824 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:13:32.171835 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:13:32.171846 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:32.171856 | orchestrator |
2025-09-08 01:13:32.171867 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2025-09-08 01:13:32.171878 | orchestrator | Monday 08 September 2025 01:10:49 +0000 (0:00:02.019) 0:01:59.938 ******
2025-09-08 01:13:32.171888 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:32.171899 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:13:32.171910 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:13:32.171920 | orchestrator |
2025-09-08 01:13:32.171936 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2025-09-08 01:13:32.171948 | orchestrator | Monday 08 September 2025 01:10:51 +0000 (0:00:01.527) 0:02:01.466 ******
2025-09-08 01:13:32.171958 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:13:32.171969 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:13:32.172013 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:13:32.172035 | orchestrator |
2025-09-08 01:13:32.172055 | orchestrator | TASK [octavia : Gather facts] **************************************************
2025-09-08 01:13:32.172075 | orchestrator | Monday 08 September 2025 01:10:52 +0000 (0:00:00.886) 0:02:02.352 ******
2025-09-08 01:13:32.172095 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:13:32.172149 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:13:32.172168 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:13:32.172182 | orchestrator |
2025-09-08 01:13:32.172193 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-08 01:13:32.172204 | orchestrator | Monday 08 September 2025 01:10:54 +0000 (0:00:02.678) 0:02:05.031 ******
2025-09-08 01:13:32.172214 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:13:32.172225 | orchestrator |
2025-09-08 01:13:32.172236 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2025-09-08 01:13:32.172246 | orchestrator | Monday 08 September 2025 01:10:55 +0000 (0:00:00.637) 0:02:05.669 ******
2025-09-08 01:13:32.172257 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:13:32.172267 | orchestrator |
2025-09-08 01:13:32.172278 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-09-08 01:13:32.172289 | orchestrator | Monday 08 September 2025 01:10:59 +0000 (0:00:03.753) 0:02:09.422 ******
2025-09-08 01:13:32.172299 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:13:32.172310 | orchestrator |
2025-09-08 01:13:32.172321 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2025-09-08 01:13:32.172331 | orchestrator | Monday 08 September 2025 01:11:02 +0000 (0:00:03.185) 0:02:12.608 ******
2025-09-08 01:13:32.172342 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-09-08 01:13:32.172352 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-09-08 01:13:32.172363 | orchestrator |
2025-09-08 01:13:32.172374 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2025-09-08 01:13:32.172384 | orchestrator | Monday 08 September 2025 01:11:09 +0000 (0:00:07.242) 0:02:19.850 ******
2025-09-08 01:13:32.172395 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:13:32.172406 | orchestrator |
2025-09-08 01:13:32.172425 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2025-09-08 01:13:32.172442 | orchestrator | Monday 08 September 2025 01:11:12 +0000 (0:00:03.303) 0:02:23.153 ******
2025-09-08 01:13:32.172460 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:13:32.172478 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:13:32.172495 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:13:32.172512 | orchestrator |
2025-09-08 01:13:32.172530 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2025-09-08 01:13:32.172549 | orchestrator | Monday 08 September 2025 01:11:13 +0000 (0:00:00.346) 0:02:23.500 ******
2025-09-08 01:13:32.172571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-08 01:13:32.172588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-08 01:13:32.172651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-08 01:13:32.172667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-08 01:13:32.172681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-08 01:13:32.172701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-08 01:13:32.172713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-08 01:13:32.172725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-08 01:13:32.172737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-08 01:13:32.172783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.172798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.172815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.172827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:32.172839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:32.172850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:32.172861 | orchestrator | 2025-09-08 01:13:32.172872 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-09-08 01:13:32.172883 | orchestrator | Monday 08 September 2025 01:11:15 +0000 (0:00:02.364) 0:02:25.864 ****** 2025-09-08 01:13:32.172894 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:13:32.172904 | orchestrator | 2025-09-08 01:13:32.172915 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-09-08 01:13:32.172926 | orchestrator | Monday 08 September 2025 01:11:15 +0000 (0:00:00.136) 
0:02:26.001 ****** 2025-09-08 01:13:32.172936 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:13:32.172947 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:13:32.172957 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:13:32.172968 | orchestrator | 2025-09-08 01:13:32.172990 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-09-08 01:13:32.173001 | orchestrator | Monday 08 September 2025 01:11:16 +0000 (0:00:00.513) 0:02:26.515 ****** 2025-09-08 01:13:32.173042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-08 01:13:32.173063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}})  2025-09-08 01:13:32.173075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 01:13:32.173086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-08 01:13:32.173097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:13:32.173134 | orchestrator | skipping: 
[testbed-node-0] 2025-09-08 01:13:32.173209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-08 01:13:32.173233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 01:13:32.173263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 01:13:32.173282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-08 01:13:32.173303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:13:32.173321 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:13:32.173341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-08 01:13:32.173361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 01:13:32.173422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 01:13:32.173445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-08 01:13:32.173456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:13:32.173468 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:13:32.173479 | orchestrator | 2025-09-08 01:13:32.173490 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-08 01:13:32.173501 | orchestrator | Monday 08 September 2025 01:11:17 +0000 (0:00:00.684) 0:02:27.200 ****** 2025-09-08 01:13:32.173512 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:13:32.173523 | orchestrator | 2025-09-08 01:13:32.173534 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-09-08 01:13:32.173545 | orchestrator | Monday 08 September 2025 01:11:17 +0000 (0:00:00.550) 0:02:27.750 ****** 2025-09-08 01:13:32.173556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 01:13:32.173572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 01:13:32.173627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 01:13:32.173641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:32.173652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:32.173664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:32.173675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.173686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.173716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.173728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.173740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.173751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.173762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:32.173774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:32.173785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:32.173804 | orchestrator | 2025-09-08 01:13:32.173815 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-09-08 01:13:32.173826 | orchestrator | Monday 08 September 2025 01:11:22 +0000 (0:00:05.305) 0:02:33.056 ****** 2025-09-08 01:13:32.173853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-08 01:13:32.173865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 01:13:32.173876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 01:13:32.173887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-08 01:13:32.173898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:13:32.173909 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:13:32.173921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-08 01:13:32.173951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 01:13:32.173964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}})  2025-09-08 01:13:32.173975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-08 01:13:32.173986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:13:32.173997 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:13:32.174009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': 
'30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-08 01:13:32.174062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 01:13:32.174091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 01:13:32.174103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-08 01:13:32.174145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:13:32.174163 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:13:32.174183 | orchestrator |
2025-09-08 01:13:32.174201 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2025-09-08 01:13:32.174217 | orchestrator | Monday 08 September 2025 01:11:23 +0000 (0:00:00.939) 0:02:33.995 ******
2025-09-08 01:13:32.174229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-08 01:13:32.174240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 01:13:32.174260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 01:13:32.174284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-08 01:13:32.174296 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:13:32.174313 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:13:32.174332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-08 01:13:32.174351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 01:13:32.174369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 01:13:32.174399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-08 01:13:32.174424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:13:32.174445 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:13:32.174476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-08 01:13:32.174496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 01:13:32.174517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-08 01:13:32.174529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-08 01:13:32.174548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:13:32.174559 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:13:32.174570 | orchestrator |
2025-09-08 01:13:32.174581 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2025-09-08 01:13:32.174592 | orchestrator | Monday 08 September 2025 01:11:24 +0000 (0:00:00.863) 0:02:34.859 ****** 2025-09-08
01:13:32.174616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 01:13:32.174629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 01:13:32.174641 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 01:13:32.174659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:32.174670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:32.174681 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:32.174703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.174715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.174726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.174738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.174755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.174767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.174778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:32.174801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:32.174813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:13:32.174824 | orchestrator |
2025-09-08 01:13:32.174835 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2025-09-08 01:13:32.174846 | orchestrator | Monday 08 September 2025 01:11:29 +0000 (0:00:05.214) 0:02:40.073 ******
2025-09-08 01:13:32.174857 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2025-09-08 01:13:32.174869 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2025-09-08 01:13:32.174879 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2025-09-08 01:13:32.174890 | orchestrator |
2025-09-08 01:13:32.174901 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2025-09-08 01:13:32.174918 | orchestrator | Monday 08 September 2025 01:11:32 +0000 (0:00:02.131) 0:02:42.205 ******
2025-09-08 01:13:32.174930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876',
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 01:13:32.174941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 01:13:32.174963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 
'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 01:13:32.174975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:32.174987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:32.175005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:32.175016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.175027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.175038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.175059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.175071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.175083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.175104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:32.175179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:32.175190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:32.175202 | orchestrator | 2025-09-08 01:13:32.175213 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-09-08 01:13:32.175224 | orchestrator | Monday 08 September 2025 01:11:48 +0000 (0:00:16.290) 0:02:58.495 ****** 2025-09-08 01:13:32.175235 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:32.175245 | orchestrator | changed: [testbed-node-1] 
2025-09-08 01:13:32.175256 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:13:32.175267 | orchestrator | 2025-09-08 01:13:32.175278 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-09-08 01:13:32.175288 | orchestrator | Monday 08 September 2025 01:11:49 +0000 (0:00:01.558) 0:03:00.054 ****** 2025-09-08 01:13:32.175299 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-08 01:13:32.175310 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-08 01:13:32.175321 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-08 01:13:32.175331 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-08 01:13:32.175342 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-08 01:13:32.175353 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-08 01:13:32.175363 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-08 01:13:32.175379 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-08 01:13:32.175390 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-08 01:13:32.175407 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-08 01:13:32.175418 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-08 01:13:32.175429 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-08 01:13:32.175440 | orchestrator | 2025-09-08 01:13:32.175451 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-09-08 01:13:32.175478 | orchestrator | Monday 08 September 2025 01:11:55 +0000 (0:00:05.457) 0:03:05.511 ****** 2025-09-08 01:13:32.175497 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-08 01:13:32.175515 | orchestrator | changed: 
[testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-08 01:13:32.175532 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-08 01:13:32.175549 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-08 01:13:32.175564 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-08 01:13:32.175574 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-08 01:13:32.175584 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-08 01:13:32.175593 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-08 01:13:32.175602 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-08 01:13:32.175612 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-08 01:13:32.175621 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-08 01:13:32.175630 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-08 01:13:32.175640 | orchestrator | 2025-09-08 01:13:32.175649 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-09-08 01:13:32.175659 | orchestrator | Monday 08 September 2025 01:12:00 +0000 (0:00:05.498) 0:03:11.009 ****** 2025-09-08 01:13:32.175669 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-08 01:13:32.175678 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-08 01:13:32.175687 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-08 01:13:32.175697 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-08 01:13:32.175706 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-08 01:13:32.175716 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-08 01:13:32.175725 | orchestrator | changed: 
[testbed-node-1] => (item=server_ca.cert.pem) 2025-09-08 01:13:32.175735 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-08 01:13:32.175744 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-08 01:13:32.175754 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-08 01:13:32.175763 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-08 01:13:32.175773 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-08 01:13:32.175782 | orchestrator | 2025-09-08 01:13:32.175792 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-09-08 01:13:32.175802 | orchestrator | Monday 08 September 2025 01:12:06 +0000 (0:00:05.641) 0:03:16.651 ****** 2025-09-08 01:13:32.175812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 01:13:32.175834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 01:13:32.175852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 01:13:32.175862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:32.175873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:32.175883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:32.175893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.175903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.175931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.175942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 
2025-09-08 01:13:32.175953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.175963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:32.175973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:32.175983 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:32.175999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:32.176009 | orchestrator | 2025-09-08 01:13:32.176019 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-08 01:13:32.176033 | orchestrator | Monday 08 September 2025 01:12:10 +0000 (0:00:03.727) 0:03:20.381 ****** 2025-09-08 01:13:32.176044 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:13:32.176062 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:13:32.176078 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:13:32.176093 | orchestrator | 2025-09-08 01:13:32.176169 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-09-08 01:13:32.176190 | orchestrator | Monday 08 September 2025 01:12:10 +0000 (0:00:00.417) 0:03:20.798 ****** 2025-09-08 01:13:32.176207 | orchestrator | changed: [testbed-node-0] 2025-09-08 
01:13:32.176220 | orchestrator | 2025-09-08 01:13:32.176238 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-09-08 01:13:32.176254 | orchestrator | Monday 08 September 2025 01:12:12 +0000 (0:00:02.056) 0:03:22.855 ****** 2025-09-08 01:13:32.176265 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:32.176275 | orchestrator | 2025-09-08 01:13:32.176284 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-09-08 01:13:32.176294 | orchestrator | Monday 08 September 2025 01:12:14 +0000 (0:00:02.027) 0:03:24.882 ****** 2025-09-08 01:13:32.176304 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:32.176313 | orchestrator | 2025-09-08 01:13:32.176323 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-09-08 01:13:32.176332 | orchestrator | Monday 08 September 2025 01:12:16 +0000 (0:00:02.214) 0:03:27.097 ****** 2025-09-08 01:13:32.176342 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:32.176351 | orchestrator | 2025-09-08 01:13:32.176361 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-09-08 01:13:32.176370 | orchestrator | Monday 08 September 2025 01:12:19 +0000 (0:00:02.308) 0:03:29.406 ****** 2025-09-08 01:13:32.176380 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:32.176389 | orchestrator | 2025-09-08 01:13:32.176399 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-08 01:13:32.176408 | orchestrator | Monday 08 September 2025 01:12:40 +0000 (0:00:21.492) 0:03:50.898 ****** 2025-09-08 01:13:32.176418 | orchestrator | 2025-09-08 01:13:32.176428 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-08 01:13:32.176437 | orchestrator | Monday 08 September 2025 01:12:40 +0000 (0:00:00.073) 0:03:50.972 ****** 
2025-09-08 01:13:32.176447 | orchestrator | 2025-09-08 01:13:32.176456 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-08 01:13:32.176466 | orchestrator | Monday 08 September 2025 01:12:40 +0000 (0:00:00.081) 0:03:51.054 ****** 2025-09-08 01:13:32.176475 | orchestrator | 2025-09-08 01:13:32.176485 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-09-08 01:13:32.176495 | orchestrator | Monday 08 September 2025 01:12:40 +0000 (0:00:00.064) 0:03:51.118 ****** 2025-09-08 01:13:32.176504 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:32.176514 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:13:32.176523 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:13:32.176533 | orchestrator | 2025-09-08 01:13:32.176550 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-09-08 01:13:32.176560 | orchestrator | Monday 08 September 2025 01:12:57 +0000 (0:00:16.097) 0:04:07.215 ****** 2025-09-08 01:13:32.176570 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:32.176579 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:13:32.176589 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:13:32.176598 | orchestrator | 2025-09-08 01:13:32.176605 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-09-08 01:13:32.176613 | orchestrator | Monday 08 September 2025 01:13:03 +0000 (0:00:06.963) 0:04:14.179 ****** 2025-09-08 01:13:32.176621 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:32.176629 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:13:32.176637 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:13:32.176644 | orchestrator | 2025-09-08 01:13:32.176652 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-09-08 01:13:32.176660 | orchestrator | Monday 08 
September 2025 01:13:09 +0000 (0:00:05.897) 0:04:20.076 ****** 2025-09-08 01:13:32.176668 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:32.176675 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:13:32.176683 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:13:32.176691 | orchestrator | 2025-09-08 01:13:32.176699 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-09-08 01:13:32.176706 | orchestrator | Monday 08 September 2025 01:13:20 +0000 (0:00:10.613) 0:04:30.689 ****** 2025-09-08 01:13:32.176714 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:13:32.176722 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:32.176730 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:13:32.176738 | orchestrator | 2025-09-08 01:13:32.176745 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:13:32.176753 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-08 01:13:32.176762 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-08 01:13:32.176770 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-08 01:13:32.176778 | orchestrator | 2025-09-08 01:13:32.176786 | orchestrator | 2025-09-08 01:13:32.176794 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:13:32.176802 | orchestrator | Monday 08 September 2025 01:13:31 +0000 (0:00:10.673) 0:04:41.362 ****** 2025-09-08 01:13:32.176809 | orchestrator | =============================================================================== 2025-09-08 01:13:32.176817 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.49s 2025-09-08 01:13:32.176825 | orchestrator | octavia : Copying over octavia.conf 
------------------------------------ 16.29s 2025-09-08 01:13:32.176833 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.10s 2025-09-08 01:13:32.176845 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.32s 2025-09-08 01:13:32.176853 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.23s 2025-09-08 01:13:32.176865 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.67s 2025-09-08 01:13:32.176874 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.61s 2025-09-08 01:13:32.176881 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.78s 2025-09-08 01:13:32.176889 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.03s 2025-09-08 01:13:32.176897 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.24s 2025-09-08 01:13:32.176905 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.16s 2025-09-08 01:13:32.176913 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.96s 2025-09-08 01:13:32.176926 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.52s 2025-09-08 01:13:32.176934 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.90s 2025-09-08 01:13:32.176941 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.78s 2025-09-08 01:13:32.176949 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.64s 2025-09-08 01:13:32.176957 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.58s 2025-09-08 01:13:32.176965 | orchestrator | octavia : Update loadbalancer management subnet 
------------------------- 5.53s 2025-09-08 01:13:32.176972 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.50s 2025-09-08 01:13:32.176980 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.46s 2025-09-08 01:13:32.176988 | orchestrator | 2025-09-08 01:13:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:13:35.217367 | orchestrator | 2025-09-08 01:13:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:13:38.262398 | orchestrator | 2025-09-08 01:13:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:13:41.303859 | orchestrator | 2025-09-08 01:13:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:13:44.340561 | orchestrator | 2025-09-08 01:13:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:13:47.378181 | orchestrator | 2025-09-08 01:13:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:13:50.417690 | orchestrator | 2025-09-08 01:13:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:13:53.462224 | orchestrator | 2025-09-08 01:13:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:13:56.499635 | orchestrator | 2025-09-08 01:13:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:13:59.542491 | orchestrator | 2025-09-08 01:13:59 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:14:02.582901 | orchestrator | 2025-09-08 01:14:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:14:05.638291 | orchestrator | 2025-09-08 01:14:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:14:08.672792 | orchestrator | 2025-09-08 01:14:08 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:14:11.711851 | orchestrator | 2025-09-08 01:14:11 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 
01:14:14.755837 | orchestrator | 2025-09-08 01:14:14 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:14:17.800106 | orchestrator | 2025-09-08 01:14:17 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:14:20.847578 | orchestrator | 2025-09-08 01:14:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:14:23.893883 | orchestrator | 2025-09-08 01:14:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:14:26.938634 | orchestrator | 2025-09-08 01:14:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:14:29.979792 | orchestrator | 2025-09-08 01:14:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:14:33.022834 | orchestrator | 2025-09-08 01:14:33.384446 | orchestrator | 2025-09-08 01:14:33.394296 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Sep 8 01:14:33 UTC 2025 2025-09-08 01:14:33.394330 | orchestrator | 2025-09-08 01:14:33.762016 | orchestrator | ok: Runtime: 0:36:03.161895 2025-09-08 01:14:34.026242 | 2025-09-08 01:14:34.026443 | TASK [Bootstrap services] 2025-09-08 01:14:34.800125 | orchestrator | 2025-09-08 01:14:34.800374 | orchestrator | # BOOTSTRAP 2025-09-08 01:14:34.800397 | orchestrator | 2025-09-08 01:14:34.800411 | orchestrator | + set -e 2025-09-08 01:14:34.800425 | orchestrator | + echo 2025-09-08 01:14:34.800440 | orchestrator | + echo '# BOOTSTRAP' 2025-09-08 01:14:34.800458 | orchestrator | + echo 2025-09-08 01:14:34.800504 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-08 01:14:34.808840 | orchestrator | + set -e 2025-09-08 01:14:34.808869 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-08 01:14:39.281298 | orchestrator | 2025-09-08 01:14:39 | INFO  | It takes a moment until task 8fbbb8d9-f1a0-4f97-9518-3bb632ca1a86 (flavor-manager) has been started and output is visible here. 
2025-09-08 01:14:43.009073 | orchestrator | Traceback (most recent call last):
2025-09-08 01:14:43.009196 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py", line 194, in run
2025-09-08 01:14:43.009291 | orchestrator |     manager = FlavorManager(
2025-09-08 01:14:43.009302 | orchestrator |         cloud=Cloud(cloud),
2025-09-08 01:14:43.009314 | orchestrator |         definitions=definitions,
2025-09-08 01:14:43.009325 | orchestrator |         recommended=recommended,
2025-09-08 01:14:43.009371 | orchestrator |     locals: cloud='admin', debug=False, level='INFO', limit_memory=32, name='local', recommended=True, url=None
2025-09-08 01:14:43.009404 | orchestrator |     locals: definitions['reference'] = [{'field': 'name', 'mandatory_prefix': 'SCS-'}, {'field': 'cpus'}, {'field': 'ram'}, {'field': 'disk'}, {'field': 'public', 'default': True}, {'field': 'disabled', 'default': False}]
2025-09-08 01:14:43.009503 | orchestrator |     locals: definitions['mandatory'] = [SCS-1L-1, SCS-1L-1-5, SCS-1V-2, SCS-1V-2-5, SCS-1V-4, SCS-1V-4-10, SCS-1V-8, SCS-1V-8-20, SCS-2V-4, SCS-2V-4-10, ... +19] (each entry with cpus/ram/disk, 'scs:cpu-type', 'scs:disk0-type', 'scs:name-v1', 'scs:name-v2', 'hw_rng:allowed')
2025-09-08 01:14:43.038092 | orchestrator |     locals: definitions has no 'recommended' key
2025-09-08 01:14:43.038221 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py", line 101, in __init__
2025-09-08 01:14:43.038287 | orchestrator |     recommended_flavors = definitions["recommended"]
2025-09-08 01:14:43.142141 | orchestrator | KeyError: 'recommended'
2025-09-08 01:14:43.573410 | orchestrator | ERROR
2025-09-08 01:14:43.573645 | orchestrator | {
2025-09-08 01:14:43.573686 | orchestrator |     "delta": "0:00:09.030427",
2025-09-08 01:14:43.573713 | orchestrator |     "end": "2025-09-08 01:14:43.445758",
2025-09-08 01:14:43.573736 | orchestrator |     "msg": "non-zero return code",
2025-09-08 01:14:43.573756 | orchestrator |     "rc": 1,
2025-09-08 01:14:43.573776 | orchestrator |     "start": "2025-09-08 01:14:34.415331"
2025-09-08 01:14:43.573795 | orchestrator | } failure
2025-09-08 01:14:43.585334 |
2025-09-08 01:14:43.585467 | PLAY RECAP
2025-09-08 01:14:43.585551 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-09-08 01:14:43.585586 |
2025-09-08 01:14:43.797561 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-09-08 01:14:43.800294 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-08 01:14:44.563908 |
2025-09-08 01:14:44.564088 | PLAY [Post output play]
2025-09-08 01:14:44.580166 |
2025-09-08 01:14:44.580292 | LOOP [stage-output : Register sources]
2025-09-08 01:14:44.642231 |
2025-09-08 01:14:44.642562 | TASK [stage-output : Check sudo]
2025-09-08 01:14:45.602887 | orchestrator | sudo: a password is required
2025-09-08 01:14:45.682906 | orchestrator | ok: Runtime: 0:00:00.128583
2025-09-08 01:14:45.698260 |
2025-09-08 01:14:45.698423 | LOOP [stage-output : Set source and destination for files and folders]
2025-09-08 01:14:45.737151 |
2025-09-08 01:14:45.737428 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-09-08 01:14:45.806977 | orchestrator | ok
2025-09-08 01:14:45.816149 |
2025-09-08 01:14:45.816301 | LOOP [stage-output : Ensure target folders exist]
2025-09-08 01:14:46.232300 | orchestrator | ok: "docs"
2025-09-08 01:14:46.232634 |
2025-09-08 01:14:46.460817 | orchestrator | ok: "artifacts"
2025-09-08 01:14:46.661945 | orchestrator | ok: "logs"
2025-09-08 01:14:46.680100 |
2025-09-08 01:14:46.680273 | LOOP [stage-output : Copy files and folders to staging folder]
2025-09-08 01:14:46.717214 |
2025-09-08 01:14:46.717501 | TASK [stage-output : Make all log files readable]
2025-09-08 01:14:46.981760 | orchestrator | ok
2025-09-08 01:14:46.988399 |
2025-09-08 01:14:46.988510 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-09-08 01:14:47.023118 | orchestrator | skipping: Conditional result was False
2025-09-08 01:14:47.035060 |
2025-09-08 01:14:47.035196 | TASK [stage-output : Discover log files for compression]
2025-09-08 01:14:47.059209 | orchestrator | skipping: Conditional result was False
2025-09-08 01:14:47.071771 |
2025-09-08 01:14:47.071926 | LOOP [stage-output : Archive everything from logs]
2025-09-08 01:14:47.114482 |
2025-09-08 01:14:47.114620 | PLAY [Post cleanup play]
2025-09-08 01:14:47.122134 |
2025-09-08 01:14:47.122232 | TASK [Set cloud fact (Zuul deployment)]
2025-09-08 01:14:47.182155 | orchestrator | ok
2025-09-08 01:14:47.190550 |
2025-09-08 01:14:47.190653 | TASK [Set cloud fact (local deployment)]
2025-09-08 01:14:47.213780 | orchestrator | skipping: Conditional result was False
2025-09-08 01:14:47.222932 |
2025-09-08 01:14:47.223046 | TASK [Clean the cloud environment]
2025-09-08 01:14:47.749758 | orchestrator | 2025-09-08 01:14:47 - clean up servers
2025-09-08 01:14:48.465702 | orchestrator | 2025-09-08 01:14:48 - testbed-manager
2025-09-08 01:14:48.546604 | orchestrator | 2025-09-08 01:14:48 - testbed-node-2
2025-09-08 01:14:48.626937 | orchestrator | 2025-09-08 01:14:48 - testbed-node-0
2025-09-08 01:14:48.710135 | orchestrator | 2025-09-08 01:14:48 - testbed-node-4
2025-09-08 01:14:48.811716 | orchestrator | 2025-09-08 01:14:48 - testbed-node-3
2025-09-08 01:14:48.899828 | orchestrator | 2025-09-08 01:14:48 - testbed-node-1
2025-09-08 01:14:48.992324 | orchestrator | 2025-09-08 01:14:48 - testbed-node-5
2025-09-08 01:14:49.078721 | orchestrator | 2025-09-08 01:14:49 - clean up keypairs
2025-09-08 01:14:49.094853 | orchestrator | 2025-09-08 01:14:49 - testbed
2025-09-08 01:14:49.117286 | orchestrator | 2025-09-08 01:14:49 - wait for servers to be gone
2025-09-08 01:15:00.220771 | orchestrator | 2025-09-08 01:15:00 - clean up ports
2025-09-08 01:15:00.400790 | orchestrator | 2025-09-08 01:15:00 - 2e8c84a4-ad65-4e2f-a458-6f490a770a54
2025-09-08 01:15:00.659573 | orchestrator | 2025-09-08 01:15:00 - 3a6c2b5a-3fb9-48d7-9f14-7db3ae2b07c6
2025-09-08 01:15:00.916580 | orchestrator | 2025-09-08 01:15:00 - 41e7f526-5170-4059-8f96-80668bf02b09
2025-09-08 01:15:01.168225 | orchestrator | 2025-09-08 01:15:01 - 87e4445c-affc-4da5-a392-0d55edb1861d
2025-09-08 01:15:01.556375 | orchestrator | 2025-09-08 01:15:01 - 93da8d7c-016f-4930-a987-38ba4452b2c5
2025-09-08 01:15:01.760012 | orchestrator | 2025-09-08 01:15:01 - ac9a5a82-2c38-42e2-a66e-767fcbdd0623
2025-09-08 01:15:01.976698 | orchestrator | 2025-09-08 01:15:01 - fded8601-6d22-4287-b0b4-c0c6b716401c
2025-09-08 01:15:02.185467 | orchestrator | 2025-09-08 01:15:02 - clean up volumes
2025-09-08 01:15:02.306844 | orchestrator | 2025-09-08 01:15:02 - testbed-volume-5-node-base
2025-09-08 01:15:02.348010 | orchestrator | 2025-09-08 01:15:02 - testbed-volume-manager-base
2025-09-08 01:15:02.386068 | orchestrator | 2025-09-08 01:15:02 - testbed-volume-4-node-base
2025-09-08 01:15:02.426266 | orchestrator | 2025-09-08 01:15:02 - testbed-volume-3-node-base
2025-09-08 01:15:02.470243 | orchestrator | 2025-09-08 01:15:02 - testbed-volume-2-node-base
2025-09-08 01:15:02.510797 | orchestrator | 2025-09-08 01:15:02 - testbed-volume-0-node-base
2025-09-08 01:15:02.550968 | orchestrator | 2025-09-08 01:15:02 - testbed-volume-1-node-4
2025-09-08 01:15:02.597973 | orchestrator | 2025-09-08 01:15:02 - testbed-volume-4-node-4
2025-09-08 01:15:02.644467 | orchestrator | 2025-09-08 01:15:02 - testbed-volume-1-node-base
2025-09-08 01:15:02.688572 | orchestrator | 2025-09-08 01:15:02 - testbed-volume-5-node-5
2025-09-08 01:15:02.730526 | orchestrator | 2025-09-08 01:15:02 - testbed-volume-3-node-3
2025-09-08 01:15:02.772015 | orchestrator | 2025-09-08 01:15:02 - testbed-volume-2-node-5
2025-09-08 01:15:02.815273 | orchestrator | 2025-09-08 01:15:02 - testbed-volume-0-node-3
2025-09-08 01:15:02.859168 | orchestrator | 2025-09-08 01:15:02 - testbed-volume-8-node-5
2025-09-08 01:15:02.903566 | orchestrator | 2025-09-08 01:15:02 - testbed-volume-7-node-4
2025-09-08 01:15:02.942084 | orchestrator | 2025-09-08 01:15:02 - testbed-volume-6-node-3
2025-09-08 01:15:02.984697 | orchestrator | 2025-09-08 01:15:02 - disconnect routers
2025-09-08 01:15:03.107228 | orchestrator | 2025-09-08 01:15:03 - testbed
2025-09-08 01:15:04.119800 | orchestrator | 2025-09-08 01:15:04 - clean up subnets
2025-09-08 01:15:04.175550 | orchestrator | 2025-09-08 01:15:04 - subnet-testbed-management
2025-09-08 01:15:04.370308 | orchestrator | 2025-09-08 01:15:04 - clean up networks
2025-09-08 01:15:04.556746 | orchestrator | 2025-09-08 01:15:04 - net-testbed-management
2025-09-08 01:15:04.862319 | orchestrator | 2025-09-08 01:15:04 - clean up security groups
2025-09-08 01:15:04.909126 | orchestrator | 2025-09-08 01:15:04 - testbed-node
2025-09-08 01:15:05.025731 | orchestrator | 2025-09-08 01:15:05 - testbed-management
2025-09-08 01:15:05.152027 | orchestrator | 2025-09-08 01:15:05 - clean up floating ips
2025-09-08 01:15:05.185135 | orchestrator | 2025-09-08 01:15:05 - 81.163.192.100
2025-09-08 01:15:05.523644 | orchestrator | 2025-09-08 01:15:05 - clean up routers
2025-09-08 01:15:05.635935 | orchestrator | 2025-09-08 01:15:05 - testbed
2025-09-08 01:15:07.273393 | orchestrator | ok: Runtime: 0:00:19.555088
2025-09-08 01:15:07.277563 |
2025-09-08 01:15:07.277719 | PLAY RECAP
2025-09-08 01:15:07.277847 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-09-08 01:15:07.277943 |
2025-09-08 01:15:07.410434 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-08 01:15:07.412992 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-08 01:15:08.160933 |
2025-09-08 01:15:08.161113 | PLAY [Cleanup play]
2025-09-08 01:15:08.177551 |
2025-09-08 01:15:08.177705 | TASK [Set cloud fact (Zuul deployment)]
2025-09-08 01:15:08.232654 | orchestrator | ok
2025-09-08 01:15:08.240050 |
2025-09-08 01:15:08.240190 | TASK [Set cloud fact (local deployment)]
2025-09-08 01:15:08.275412 | orchestrator | skipping: Conditional result was False
2025-09-08 01:15:08.292136 |
2025-09-08 01:15:08.292299 | TASK [Clean the cloud environment]
2025-09-08 01:15:09.378135 | orchestrator | 2025-09-08 01:15:09 - clean up servers
2025-09-08 01:15:09.922493 | orchestrator | 2025-09-08 01:15:09 - clean up keypairs
2025-09-08 01:15:09.941097 | orchestrator | 2025-09-08 01:15:09 - wait for servers to be gone
2025-09-08 01:15:09.981013 | orchestrator | 2025-09-08 01:15:09 - clean up ports
2025-09-08 01:15:10.054616 | orchestrator | 2025-09-08 01:15:10 - clean up volumes
2025-09-08 01:15:10.112603 | orchestrator | 2025-09-08 01:15:10 - disconnect routers
2025-09-08 01:15:10.138792 | orchestrator | 2025-09-08 01:15:10 - clean up subnets
2025-09-08 01:15:10.157590 | orchestrator | 2025-09-08 01:15:10 - clean up networks
2025-09-08 01:15:10.318736 | orchestrator | 2025-09-08 01:15:10 - clean up security groups
2025-09-08 01:15:10.355388 | orchestrator | 2025-09-08 01:15:10 - clean up floating ips
2025-09-08 01:15:10.383816 | orchestrator | 2025-09-08 01:15:10 - clean up routers
2025-09-08 01:15:10.835651 | orchestrator | ok: Runtime: 0:00:01.336897
2025-09-08 01:15:10.839927 |
2025-09-08 01:15:10.840105 | PLAY RECAP
2025-09-08 01:15:10.840230 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-09-08 01:15:10.840293 |
2025-09-08 01:15:10.965133 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-08 01:15:10.967570 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-08 01:15:11.725921 |
2025-09-08 01:15:11.726083 | PLAY [Base post-fetch]
2025-09-08 01:15:11.741380 |
2025-09-08 01:15:11.741506 | TASK [fetch-output : Set log path for multiple nodes]
2025-09-08 01:15:11.807531 | orchestrator | skipping: Conditional result was False
2025-09-08 01:15:11.824696 |
2025-09-08 01:15:11.825073 | TASK [fetch-output : Set log path for single node]
2025-09-08 01:15:11.886098 | orchestrator | ok
2025-09-08 01:15:11.895701 |
2025-09-08 01:15:11.895859 | LOOP [fetch-output : Ensure local output dirs]
2025-09-08 01:15:12.413169 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/8b726dec01db4467b34e0a590ec8733d/work/logs"
2025-09-08 01:15:12.678704 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/8b726dec01db4467b34e0a590ec8733d/work/artifacts"
2025-09-08 01:15:12.944660 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/8b726dec01db4467b34e0a590ec8733d/work/docs"
2025-09-08 01:15:12.969164 |
2025-09-08 01:15:12.969356 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-09-08 01:15:13.950148 | orchestrator | changed: .d..t...... ./
2025-09-08 01:15:13.950463 | orchestrator | changed: All items complete
2025-09-08 01:15:13.950503 |
2025-09-08 01:15:14.675618 | orchestrator | changed: .d..t...... ./
2025-09-08 01:15:15.394268 | orchestrator | changed: .d..t...... ./
2025-09-08 01:15:15.423518 |
2025-09-08 01:15:15.423674 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-09-08 01:15:15.462480 | orchestrator | skipping: Conditional result was False
2025-09-08 01:15:15.465101 | orchestrator | skipping: Conditional result was False
2025-09-08 01:15:15.487045 |
2025-09-08 01:15:15.487157 | PLAY RECAP
2025-09-08 01:15:15.487230 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-09-08 01:15:15.487267 |
2025-09-08 01:15:15.638908 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-08 01:15:15.641305 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-08 01:15:16.432821 |
2025-09-08 01:15:16.433064 | PLAY [Base post]
2025-09-08 01:15:16.449682 |
2025-09-08 01:15:16.449909 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-09-08 01:15:17.410353 | orchestrator | changed
2025-09-08 01:15:17.419326 |
2025-09-08 01:15:17.419454 | PLAY RECAP
2025-09-08 01:15:17.419526 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-09-08 01:15:17.419600 |
2025-09-08 01:15:17.555368 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-08 01:15:17.558305 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-09-08 01:15:18.381387 |
2025-09-08 01:15:18.381588 | PLAY [Base post-logs]
2025-09-08 01:15:18.392646 |
2025-09-08 01:15:18.392796 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-09-08 01:15:18.878739 | localhost | changed
2025-09-08 01:15:18.895419 |
2025-09-08 01:15:18.895588 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-09-08 01:15:18.935737 | localhost | ok
2025-09-08 01:15:18.942713 |
2025-09-08 01:15:18.942957 | TASK [Set zuul-log-path fact]
2025-09-08 01:15:18.973188 | localhost | ok
2025-09-08 01:15:18.988976 |
2025-09-08 01:15:18.989123 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-08 01:15:19.028440 | localhost | ok
2025-09-08 01:15:19.035096 |
2025-09-08 01:15:19.035275 | TASK [upload-logs : Create log directories]
2025-09-08 01:15:19.544139 | localhost | changed
2025-09-08 01:15:19.547103 |
2025-09-08 01:15:19.547212 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-09-08 01:15:20.060226 | localhost -> localhost | ok: Runtime: 0:00:00.007217
2025-09-08 01:15:20.068757 |
2025-09-08 01:15:20.068944 | TASK [upload-logs : Upload logs to log server]
2025-09-08 01:15:20.660766 | localhost | Output suppressed because no_log was given
2025-09-08 01:15:20.664953 |
2025-09-08 01:15:20.665143 | LOOP [upload-logs : Compress console log and json output]
2025-09-08 01:15:20.728133 | localhost | skipping: Conditional result was False
2025-09-08 01:15:20.733155 | localhost | skipping: Conditional result was False
2025-09-08 01:15:20.740536 |
2025-09-08 01:15:20.740751 | LOOP [upload-logs : Upload compressed console log and json output]
2025-09-08 01:15:20.789440 | localhost | skipping: Conditional result was False
2025-09-08 01:15:20.790128 |
2025-09-08 01:15:20.793387 | localhost | skipping: Conditional result was False
2025-09-08 01:15:20.807714 |
2025-09-08 01:15:20.808043 | LOOP [upload-logs : Upload console log and json output]
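The build fails because `FlavorManager.__init__` indexes `definitions["recommended"]` while the definitions loaded from the `local` source (url=None) carry only `reference` and `mandatory` keys, even though `recommended=True` was passed. A minimal sketch of a defensive guard follows; the `select_flavors` helper and its signature are hypothetical illustrations, not part of openstack-flavor-manager, and only the `definitions`/`recommended`/`limit_memory` names mirror the traceback above:

```python
def select_flavors(definitions: dict, recommended: bool, limit_memory: int = 32) -> list:
    """Return mandatory flavors, plus recommended ones when present and requested."""
    flavors = list(definitions["mandatory"])
    if recommended:
        # definitions["recommended"] raised KeyError in the run above;
        # dict.get with a default degrades gracefully when the key is absent.
        limit_memory_mb = limit_memory * 1024  # traceback shows limit_memory=32 (GiB)
        flavors += [
            f for f in definitions.get("recommended", [])
            if f.get("ram", 0) <= limit_memory_mb
        ]
    return flavors
```

With a definitions dict like the one in the failing run (no `recommended` key), this returns only the mandatory flavors instead of crashing.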